Monday, March 31, 2008

Configuring LDAP authentication and authorization on the ControlTier server

The Open.ControlTier documentation on configuring LDAP authentication and authorization has not yet been updated for the 3.1 release, and so it covers only configuring Workbench, not Jobcenter.

As an interim solution to this omission, this blog entry records the steps necessary to achieve a "state-of-the-art" configuration based on OpenLDAP. This is useful in itself, and it is also a crucial step toward integrating with Microsoft's Active Directory services, which are broadly deployed in larger enterprise infrastructures.

Modest design goal

While it is feasible to exploit LDAP authentication and authorization "pervasively" across all nodes upon which the various ControlTier components are installed, what is documented here is the more modest design goal of using LDAP to secure access only to the centralized ControlTier server, conventionally deployed to provide a single point of administration in the network.

This is a practical compromise when you consider that, more often than not, command execution on remote client systems is tied to one or more system-level "application" accounts as opposed to individual users' logins. These accounts are used to construct the network of public-key-based passwordless secure shell access from the ControlTier server.
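For reference, here is a minimal sketch of how that passwordless access is typically established; the "ctier" server account and the "app@client01" application account and host are hypothetical examples, not names required by ControlTier:

# On the ControlTier server, as the server account (e.g. "ctier"),
# generate a key pair if one does not already exist:
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Append the public key to the application account's authorized_keys on
# each client node (or copy ~/.ssh/id_rsa.pub over manually if ssh-copy-id
# is not available):
$ ssh-copy-id app@client01
# Verify that a remote command now runs without a password prompt:
$ ssh app@client01 hostname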

Comprehensive authentication and authorization for ControlTier is therefore achieved at two levels:
  1. At the system level, login access to the server and client systems must be restricted to the set of individuals authorized to use the ControlTier and "application" accounts that provide unfettered access to execute build and deployment commands in the distributed infrastructure.
  2. At the project level, access to the Workbench model and the Jobcenter command interface must be controlled by the user- and role-based authentication and authorization scheme intrinsic to those applications.
It is this latter case that this posting covers: using LDAP to manage levels of access to ControlTier's web-based services.

Deploying an LDAP instance

You can skip this section if you have an LDAP server available on your network that is accessible from the ControlTier server.

Assuming such a service does not already exist, the first step is to set up an LDAP server instance on a system that is accessible to the ControlTier server. There are many LDAP server implementations available, but here's how to set up the most popular open source one: OpenLDAP.

The OpenLDAP Quick Start Guide proposes building the officially released software from source. There are a number of binary distributions available on the Internet, of course, and many Unix-variant OSes package OpenLDAP with their releases.

In this case, I used a CentOS 4.5 instance.

These instructions assume you wish to configure and deploy a non-superuser based LDAP server instance to support ControlTier:
  • Acquire OpenLDAP, or build it from source. In this case, the software is built from source and installed under $CTIER_ROOT/pkgs to facilitate executing as the ControlTier server account (e.g. "ctier"):
    $ cd $CTIER_ROOT/src
    $ tar zxf openldap-2.4.8.tgz
    $ cd openldap-2.4.8
    $ ./configure --prefix=$CTIER_ROOT/pkgs/openldap-2.4.8
    Configuring OpenLDAP 2.4.8-Release ...
    checking build system type... i686-pc-linux-gnu
    checking host system type... i686-pc-linux-gnu
    checking target system type... i686-pc-linux-gnu
    .
    .
    .
    Making servers/slapd/overlays/statover.c
    Add seqmod ...
    Add syncprov ...
    Please run "make depend" to build dependencies
    $ make depend
    .
    .
    .
    $ make
    .
    .
    .
    $ make install
    .
    .
    .
    $ file $CTIER_ROOT/pkgs/openldap-2.4.8/libexec/slapd
    .../slapd: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), stripped
  • Customize the "slapd.conf" configuration file (in this case using the "controltier.com" domain):
    $ cd $CTIER_ROOT/pkgs/openldap-2.4.8/etc/openldap
    $ diff slapd.conf slapd.conf.orig
    54,55c54,55
    < suffix "dc=controltier,dc=com"
    < rootdn "cn=Manager,dc=controltier,dc=com"
    ---
    > suffix "dc=my-domain,dc=com"
    > rootdn "cn=Manager,dc=my-domain,dc=com"

  • Start the LDAP server on a non-privileged port:
    $ $CTIER_ROOT/pkgs/openldap-2.4.8/libexec/slapd -h ldap://*:3890/
  • Check that the server is up and running:

    $ $CTIER_ROOT/pkgs/openldap-2.4.8/bin/ldapsearch -h localhost -p 3890 -x -b '' -s base '(objectclass=*)' namingContexts
    # extended LDIF
    #
    # LDAPv3
    # base <> with scope baseObject
    # filter: (objectclass=*)
    # requesting: namingContexts
    #

    #
    dn:
    namingContexts: dc=controltier,dc=com

    # search result
    search: 2
    result: 0 Success

    # numResponses: 2
    # numEntries: 1

One thing to note is that the Elements module library contains an OpenLDAP module that can be used to facilitate management of the LDAP instance. Here's sample project object XML to configure an OpenLDAP instance for use with the setup described above:
<project>
<deployment type="OpenLDAP" name="openLDAP" description="Sample Open LDAP service object" installRoot="${env.CTIER_ROOT}/pkgs/openldap-2.4.8" basedir="${env.CTIER_ROOT}/pkgs/openldap-2.4.8" startuprank="1">
<referrers replace="false">
<resource type="Node" name="localhost"/>
</referrers>
</deployment>
</project>

... and sample command output:
$ ad -p TestProject -t OpenLDAP -o openLDAP -c Stop
running command: assertServiceIsDown
Running handler command: stopService
stopService: openLDAP OpenLDAP on localhost stopped.
[command.timer.OpenLDAP.stopService: 0.565 sec]
true. Execution time: 0.565 sec
[command.timer.Service.Stop: 2.998 sec]
command completed successfully. Execution time: 2.998 sec
$ ad -p TestProject -t OpenLDAP -o openLDAP -c Start
running command: assertServiceIsUp
Running handler command: startService
startService: openLDAP OpenLDAP on localhost started.
[command.timer.OpenLDAP.startService: 0.146 sec]
true. Execution time: 0.146 sec
[command.timer.Service.Start: 2.185 sec]
command completed successfully. Execution time: 2.185 sec
$ ad -p TestProject -t OpenLDAP -o openLDAP -c Status
running assertServiceIsUp command
assertServiceIsUp: /proc/4842 found. openLDAP OpenLDAP on localhost is up.
[command.timer.Service.Status: 2.017 sec]
command completed successfully. Execution time: 2.017 sec

Note that this sample configuration is not particularly sophisticated. There are much more flexible (and secure) ways to deploy OpenLDAP documented on their site.

Populating the directory


Workbench's use of LDAP is pretty straightforward. The Open.ControlTier site documents the capabilities of the three roles that must exist in the directory:

user - read-only access
admin - can create objects
architect - can create objects and create types

Note that both admin and architect users should also be assigned the user role since some elements of the UI assume this (e.g. checks for user role membership are embedded in some of the JSPs).

Note also that only users assigned both the admin and architect roles can create new projects.

Please ignore the sample LDIF file on Open.ControlTier, and use the following file as a guideline to structuring your directory:

$ cat users.ldif
# Define top-level entry:
dn: dc=controltier,dc=com
objectClass: dcObject
objectClass: organization
o: ControlTier, Inc.
dc: controltier

# Define an entry to contain users:
dn: ou=users,dc=controltier,dc=com
objectClass: organizationalUnit
ou: users

# Define some users:
dn: cn=user1, ou=users,dc=controltier,dc=com
userPassword: password
objectClass: person
sn: A user account with simple user privileges
cn: user1

dn: cn=user2, ou=users,dc=controltier,dc=com
userPassword: password
objectClass: person
sn: A user account with user and administrator privileges
cn: user2

dn: cn=user3, ou=users,dc=controltier,dc=com
userPassword: password
objectClass: person
sn: A user account with user, administrator and architect privileges
cn: user3

dn: cn=default, ou=users,dc=controltier,dc=com
userPassword: default
objectClass: person
sn: The default account for the ControlTier client to use
cn: default

dn: ou=roles, dc=controltier,dc=com
objectClass: organizationalUnit
ou: roles

dn: cn=architect, ou=roles,dc=controltier,dc=com
objectClass: groupOfUniqueNames
uniqueMember: cn=user3,ou=users,dc=controltier,dc=com
cn: architect

dn: cn=admin, ou=roles,dc=controltier,dc=com
objectClass: groupOfUniqueNames
uniqueMember: cn=user2,ou=users,dc=controltier,dc=com
uniqueMember: cn=user3,ou=users,dc=controltier,dc=com
uniqueMember: cn=default,ou=users,dc=controltier,dc=com
cn: admin

dn: cn=user, ou=roles,dc=controltier,dc=com
objectClass: groupOfUniqueNames
uniqueMember: cn=user1,ou=users,dc=controltier,dc=com
uniqueMember: cn=user2,ou=users,dc=controltier,dc=com
uniqueMember: cn=user3,ou=users,dc=controltier,dc=com
cn: user
Here's the command used to load the records into OpenLDAP:
$ ldapadd -x -H ldap://localhost:3890/ -D "cn=Manager,dc=controltier,dc=com" -w secret -f users.ldif
Since this file contains plaintext passwords, it is important to use OS access controls to safeguard its contents from unauthorized access.
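To confirm that the records loaded correctly, you can list the distinguished names now present in the directory (a quick sanity check; adjust the host, port and credentials to match your own setup):

$ ldapsearch -x -H ldap://localhost:3890/ \
    -D "cn=Manager,dc=controltier,dc=com" -w secret \
    -b "dc=controltier,dc=com" "(objectClass=*)" dn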

Note that you can supplement OpenLDAP's command line interface with JXplorer, an Open Source Java LDAP browser/editor client application.

Configuring Workbench to use LDAP

The next piece of the puzzle is to adjust Tomcat's security "Realm" configuration to use the LDAP server. All that's necessary is to replace the default "UserDatabaseRealm" element in "server.xml" with the following "JNDIRealm" setup:
<Realm className="org.apache.catalina.realm.JNDIRealm" debug="99"
connectionURL="ldap://localhost:3890/"
roleBase="ou=roles,dc=controltier,dc=com"
roleName="cn"
roleSearch="uniqueMember={0}"
userPattern="cn={0},ou=users,dc=controltier,dc=com"/>

This configuration specifies the connection URL to the LDAP server, matches the role base and user pattern to the directory structure (you may need to adjust these for your own directory), and uses the "bind method" of authentication described in the Tomcat 4 documentation.
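Because the realm uses the bind method, it is worth confirming that an ordinary user can bind with their own DN and password and that the role search returns the expected group. The following ldapsearch roughly approximates what Tomcat does at login time (a sketch based on the sample directory above; it should return the "user" role entry for "user1"):

$ ldapsearch -x -H ldap://localhost:3890/ \
    -D "cn=user1,ou=users,dc=controltier,dc=com" -w password \
    -b "ou=roles,dc=controltier,dc=com" \
    "(uniqueMember=cn=user1,ou=users,dc=controltier,dc=com)" cn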

Before restarting Tomcat, a final piece of configuration will make Workbench user management available from the Administration page. Edit the "auth.properties" file to switch from "default" to "jndi" authentication and authorization:
$ cat $CATALINA_BASE/webapps/itnav/WEB-INF/classes/auth.properties
######################################
# auth.properties
# This is the configuration properties file for the User Management feature.
####
# ngps.workbench.auth.type=default
ngps.workbench.auth.type=jndi

######################################
# To enable User Management with JNDI authorization, set the value of ngps.workbench.auth.type to jndi
# then fill in the JNDI configuration below.
######################################
# Configuration for JNDI authorization:
####

ngps.workbench.auth.jndi.connectionName=cn=Manager,dc=controltier,dc=com
ngps.workbench.auth.jndi.connectionPassword=secret
ngps.workbench.auth.jndi.connectionUrl=ldap://localhost:3890/
ngps.workbench.auth.jndi.roleBase=ou=roles,dc=controltier,dc=com
ngps.workbench.auth.jndi.roleNameRDN=cn
ngps.workbench.auth.jndi.roleMemberRDN=uniqueMember
ngps.workbench.auth.jndi.userBase=ou=users,dc=controltier,dc=com
ngps.workbench.auth.jndi.userNameRDN=cn

(Note that with an embedded password this is another file to safeguard with OS access control).

Once JNDI user management is enabled, it is possible to use Workbench user administration to restrict access to individual projects on a user-by-user basis, as well as to adjust each user's role assignments.


Configuring WebDAV to use LDAP

Since the ControlTier WebDAV repository is deployed to the same Tomcat instance as Workbench, it shares the same authentication realm. Not only is it prudent to protect the WebDAV from general browser-based access (e.g. by limiting which users can modify the repository), but, just as importantly, the Antdepo client requires access to the repository to upload packages and to download packages and modules.

Tomcat 4.1 includes the Apache Slide WebDAV implementation. Slide security is documented in some detail here. Fine-grained access control can be configured for both individual resources and methods. However, from ControlTier's perspective, establishing basic authorization for "admin" role members by adding the following entries to "$CATALINA_BASE/webapps/webdav/WEB-INF/web.xml" and restarting Tomcat is sufficient:
<security-constraint>
<web-resource-collection>
<web-resource-name>Administrative</web-resource-name>
<url-pattern>/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>admin</role-name>
</auth-constraint>
</security-constraint>

<login-config>
<auth-method>BASIC</auth-method>
<realm-name>JNDIRealm</realm-name>
</login-config>
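Once Tomcat has been restarted, a quick way to confirm the constraint is in force is to request the WebDAV with and without credentials for an "admin" role member ("user2" is one of the sample admin users defined in the LDIF above; this is just a spot check, not a substitute for testing the client tools themselves):

# Should be rejected with HTTP 401:
$ curl -I http://localhost:8080/webdav/
# Should succeed for a member of the "admin" role:
$ curl -I -u user2:password http://localhost:8080/webdav/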

Note that as of ControlTier 3.1.4, enabling WebDAV authorization and authentication reveals a bug in the Package module's "upload" command's use of the WebDAV "put" Ant task. The workaround is to fall back to the "scp"-based method of uploading packages to the WebDAV.

Configuring Jobcenter to use LDAP

Jobcenter LDAP configuration is modeled on Workbench's JNDI provider and implemented as a standard JAAS LoginModule integrated with Jobcenter's Jetty web application container.

Note that you must have installed at least ControlTier 3.1.4 to follow these Jobcenter configuration instructions!
  • Modify the $JOBCENTER_HOME/bin/start-jobcenter.sh script to specify "jaas-jndi.conf" in place of "jaas.conf" (this specifies the use of the "org.antdepo.webad.jaas.JNDILoginModule" JAAS login module class instead of the standard "org.antdepo.webad.jaas.PropertyFileLoginModule").
  • Modify "$JOBCENTER_HOME/webapps/jobcenter/WEB-INF/jaas-jndi.properties". This file has configuration properties similar to the auth.properties used in Workbench for JNDI authentication/authorization. The "connectionPassword" and "connectionUrl" properties should be modified as necessary; the other properties should be left alone unless the structure of the LDAP directory differs from that set up above:
    jobcenter.auth.jndi.connectionName=cn=Manager,dc=controltier,dc=com
    jobcenter.auth.jndi.connectionPassword=secret
    jobcenter.auth.jndi.connectionUrl=ldap://localhost:3890/
    jobcenter.auth.jndi.roleBase=ou=roles,dc=controltier,dc=com
    jobcenter.auth.jndi.roleNameRDN=cn
    jobcenter.auth.jndi.roleMemberRDN=uniqueMember
    jobcenter.auth.jndi.userBase=ou=users,dc=controltier,dc=com
    jobcenter.auth.jndi.userNameRDN=cn

Note that, as of ControlTier 3.1, Jobcenter has no intrinsic mechanism to manage authorization rights for job creation, modification or deletion. This means that anyone who has access to the Jobcenter console can change any job's configuration (even if they don't have the right to execute it). This applies to both scheduled and on-demand jobs. This functional gap will be dealt with in a future enhancement.

Controlling Jobcenter command execution authorization with Antdepo


The right of a user to execute a job from Jobcenter is synonymous with their underlying Antdepo authorization - Jobcenter literally exploits the Antdepo access control mechanism.

Antdepo access control is based on configuring the "$ANTDEPO_BASE/etc/acls.xml" file. The following DTD and default acls.xml show the scope for customizing authorization levels:
$ cat acls.dtd
<!ELEMENT accessto ( command ) >

<!ELEMENT acl ( accessto, by, using, when ) >
<!ATTLIST acl description CDATA #REQUIRED >

<!ELEMENT acls ( acl* ) >

<!ELEMENT by ( role ) >

<!ELEMENT command EMPTY >
<!ATTLIST command module CDATA #REQUIRED >
<!ATTLIST command name CDATA #REQUIRED >

<!ELEMENT context EMPTY >
<!ATTLIST context name CDATA #REQUIRED >
<!ATTLIST context type CDATA #REQUIRED >
<!ATTLIST context depot CDATA #REQUIRED >

<!ELEMENT role EMPTY >
<!ATTLIST role name NMTOKEN #REQUIRED >

<!ELEMENT timeandday EMPTY >
<!ATTLIST timeandday day CDATA #REQUIRED >
<!ATTLIST timeandday hour CDATA #REQUIRED >
<!ATTLIST timeandday minute CDATA #REQUIRED >

<!ELEMENT using ( context ) >

<!ELEMENT when ( timeandday ) >

$ cat acls.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE acls SYSTEM "file:///home/ctier/ctier/antdepo/etc/acls.dtd">

<acls>
<acl description="admin, access to any command using any context at anytime">
<accessto>
<command module="*" name="*"/>
</accessto>
<by>
<role name="admin"/>
</by>
<using>
<context depot="*" type="*" name="*"/>
</using>
<when>
<timeandday day="*" hour="*" minute="*"/>
</when>
</acl>
</acls>
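The default acls.xml simply grants the "admin" role blanket access. The same DTD supports more restrictive rules; for example, the following hypothetical entry (added inside the <acls> element) would grant members of the "user" role access to just the Updater module's "Deploy" command, in any depot, at any time:

<acl description="user, access to Updater Deploy using any context at anytime">
<accessto>
<command module="Updater" name="Deploy"/>
</accessto>
<by>
<role name="user"/>
</by>
<using>
<context depot="*" type="*" name="*"/>
</using>
<when>
<timeandday day="*" hour="*" minute="*"/>
</when>
</acl>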

Antdepo client configuration

Finally, every Antdepo client installation, whether local to or remote from the ControlTier server, requires access to both Workbench and the WebDAV. The sample LDIF above specifies a user called "default" with the password "default", which has the "admin" role. This is the client framework account specified in "$ANTDEPO_BASE/etc/framework.properties":
framework.server.username = default
framework.server.password = default
framework.webdav.username = default
framework.webdav.password = default
Naturally you are at liberty (and it is probably advisable) to change this account name and password (they are specified at installation time in "defaults.properties"). You should then protect the "framework.properties" file using OS authorization mechanisms.
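For example, on a Unix-like system the file can be restricted to the owning account with something as simple as:

$ chmod 600 $ANTDEPO_BASE/etc/framework.properties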

QED

Anthony Shortland
anthony@controltier.com

Friday, March 28, 2008

Making a clear distinction between a node object's name and "hostname" attribute

We've just posted ControlTier 3.1.1 to Sourceforge.

The release includes ControlTier installer, Antdepo client and Commander extension enhancements that allow a clear distinction to be drawn between the meaning and use of node object names and their "hostname" attribute value.

Until this release, it was the convention for the "client.hostname" installer property to be used to set:
  1. The "framework.node" property value for Antdepo
  2. The Node object name registered by depot-setup
  3. The Node object's "hostname" attribute, also set by depot-setup in the project
With this release, the assignments are made as follows:
  1. "client.hostname" is used to set "framework.node" and the Node object's "hostname" attribute.
  2. A new installer property, "client.node.name", is used to set the Node object name.
(Note that both installer properties default to "${server.tomcat.hostname}" - which in turn defaults to "localhost" - to maintain backward compatibility).

Drawing this distinction has a number of benefits.

From the project perspective, it is now possible to register Node objects with logically relevant names. That is, instead of a Node object being named using the (possibly user-name-qualified) network host name used to access it (e.g. "prd@w01d02.company.com"), it can use an application-centric functional name (e.g. "web-server-01", "build-box", etc.).

A second benefit - and the one that really drove the timing of this change - is that it was difficult to exploit the "user@host" support introduced for "client.hostname" with 3.1, since the ProjectBuilder object XML ("projectxml") needs to translate all object names into XML element names as part of the "load-objects" command, and XML does not support element names that include the "@" character.

Thus, you can see that this enhancement tidies up a loose end in the 3.1 functionality. (Note that these ideas are going to be further enhanced and extended under 3.2).

Here's a practical example of taking a single node (called "development") and installing the ControlTier server under one account ("anthony"), and establishing two client installations on the same system under separate accounts ("user1" and "user2").

The three "default.properties" files are setup as follows:
[anthony@development ControlTier-3.1]$ diff default.properties default.properties.orig
31c31
< server.tomcat.hostname = development
---
> server.tomcat.hostname = localhost
98c98
< client.hostname = anthony@development
---
> client.hostname = ${server.tomcat.hostname}
103c103
< client.node.name = server
---
> client.node.name = ${server.tomcat.hostname}

[user1@development ControlTier-3.1]$ diff default.properties default.properties.orig
31c31
< server.tomcat.hostname = development
---
> server.tomcat.hostname = localhost
98c98
< client.hostname = user1@development
---
> client.hostname = ${server.tomcat.hostname}
103c103
< client.node.name = client-1
---
> client.node.name = ${server.tomcat.hostname}

[user2@development ControlTier-3.1]$ diff default.properties default.properties.orig
31c31
< server.tomcat.hostname = development
---
> server.tomcat.hostname = localhost
98c98
< client.hostname = user2@development
---
> client.hostname = ${server.tomcat.hostname}
103c103
< client.node.name = client-2
---
> client.node.name = ${server.tomcat.hostname}

(Note that in all cases "server.tomcat.hostname" is set to the host name of the ControlTier server system - no user name qualification is required, of course).

With the installer configured this way, the Node objects are registered in Workbench under the logical names given above ("server", "client-1" and "client-2").

Finally, note how node object attributes can be subsequently updated using object XML, in this example setting custom descriptions:
<?xml version="1.0"?>

<!DOCTYPE project PUBLIC "-//ControlTier Software Inc.//DTD Project Document 1.0//EN" "project.dtd">

<project>
<node type="Node" name="server" description="ControlTier server node" hostname="anthony@development"/>
<node type="Node" name="client-1" description="ControlTier client node" hostname="user1@development"/>
<node type="Node" name="client-2" description="ControlTier client node" hostname="user2@development"/>
</project>

Anthony Shortland.
anthony@controltier.com

Thursday, March 27, 2008

ControlTier is a service deployment automation solution for SaaS providers

Damon and I just spent two days manning the ControlTier exhibit at the 2008 SaaSCon at the Santa Clara convention center.


Conferences like this are always a bit of a "busman's holiday" for me, presenting the chance to sell ControlTier as opposed to my normal job of delivering ControlTier solutions!

The attendees split fairly evenly into SaaS providers and SaaS users, and we quickly realized that our best approach was to sort one from the other and focus on promoting ControlTier as a solution for SaaS providers - something complementary to the other competencies and services they seek to buy in, such as hosting, etc.

The first and most obvious fit for ControlTier was merely recognizing that application development and deployment for SaaS delivery is just the latest generation of complex, multi-tier web-based applications - only more so.

These businesses have an even greater motivation to exploit the reliability, sophistication, scaling and efficiencies that technical process automation with ControlTier brings.

The second observation is that with the advent of SaaS companies, business and technical process automation has converged, eliminating all the manual steps between service sales and delivery.

There is a real opportunity for ControlTier to play a direct (rather than supporting) role in service delivery in the case of a SaaS provider that needs to deploy new application services (e.g. a VM and any of the components above it including a real or virtual web server) in order to deliver their service.

Anthony Shortland.
anthony@controltier.com

Tuesday, March 25, 2008

Upgrading the ControlTier Framework

Our installation documentation does a good job of describing how to establish a new ControlTier infrastructure, but there are a couple of considerations when upgrading an existing setup. Here's a reformatted version of some notes I originally posted to the ControlTier Google Group:

The process of "upgrading in place" has improved significantly over recent releases and I thought I'd document the steps here for anyone contemplating taking it on for an existing project:
  • Preparation

    Please take the precaution of backing up your $CTIER_ROOT directory, committing any ProjectBuilder source changes you might have, and using the "Create Archive" function of the Workbench administration page to safeguard a copy of your project(s) before starting.
  • Software upgrade

    After shutting down Workbench and Jobcenter, upgrade the ControlTier software on your ControlTier server following the installation process documented on Open.ControlTier. Many of the steps will be redundant, but you can still follow the process to safely overwrite and supplement your existing installation. Make sure you make any site-specific "default.properties" changes that may be necessary. Once the installation process is complete, you'll have an updated ".ctierrc" in place, so make sure to restart Workbench and Jobcenter from a new shell. Follow the client-only software installation process to upgrade any client systems you may have in the same manner.
  • Base seed upgrade

    Each release of the ControlTier software comes with an updated set of base modules. These are automatically utilized when new projects are created, but existing projects need to be explicitly upgraded. The software upgrade process places the base seed archive on the ControlTier server's WebDAV at http://localhost:8080/webdav/seeds (or wherever you're running the server). With your project selected in Workbench, use the administration page's "Import Seed" function to load the "base-seed.jar" file. You'll need to repeat this for each active project in Workbench. (Note that you can also use ProjectBuilder's "load-library" command to upgrade the base seed).
  • Module upgrade

    The final step is to bring the server and client nodes' Antdepo module cache into line with the newly updated project(s). The cleanest way to do this currently is:

    1. Remove the depot's module library cache (e.g. "rm -rf $ANTDEPO_BASE/depots/myproject/lib").
    2. Re-create the project's depot structure (e.g. "depot-setup -p myproject -a create").
    3. Re-install the project's deployments (e.g. "depot-setup -p myproject -a install").

    Note that this approach leaves your project's deployment working files (under "$ANTDEPO_BASE/depots/myproject/deployments") completely untouched. In fact you can safely apply this upgrade while your application deployments are running.
Anthony Shortland.
anthony@controltier.com

Settings, attributes and option default values

The ControlTier 3.1 framework includes a pretty elegant set of rules and conventions for managing the options used in command handlers. Whereas prior to 3.1 the delivery of model data values into a particular command context was fairly unstructured, there is now an ordered set of steps to achieve this that offers the predictability of an established order of precedence.

Reviewing the following excerpt from a module's type XML:
<?xml version="1.0" encoding="UTF-8"?>

<!--
type-xml.generated.date: Wed Mar 07 11:53:24 PST 2007
type-xml.generator.class: com.controltier.shared.cli.ConvertTypeRDFToXML
type-xml.author: $Author$
type-xml.id: $Id$
-->

<types xmlns:cmd="http://open.controltier.com/base/Modules/Commands#" xmlns:module="http://open.controltier.com/base/Modules#" xmlns:type="http://open.controltier.com/base/Types#">
.
.
.

<type role="concrete" uniqueInstances="true" name="HsqldbRdbJavaHome" order="Setting">
<description>JAVA_HOME to use to run a Hypersonic SQL database server instance</description>
<supertype>
<typereference name="HsqldbRdbSetting"/>
</supertype>
<attributes>
<attribute name="hsqldbRdbJavaHome" type-property="settingValue"/>
</attributes>
</type>

.
.
.

<type role="concrete" uniqueInstances="true" name="HsqldbRdb" order="Service">
<description>A Hypersonic SQL database service</description>

<supertype>
<typereference name="Rdb"/>
</supertype>

<attributes>
<attribute-default name="hsqldbRdbJavaHome" value="${env.JAVA_HOME}"/>
.
.
.
</attributes>

<constraints>
<dependency-constraint enforced="false" kind="child">
<allowedtypes>
.
.
.
<typereference name="HsqldbRdbJavaHome"/>
.
.
.
</allowedtypes>
<singletontypes>
.
.
.
<typereference name="HsqldbRdbJavaHome"/>
.
.
.
</singletontypes>
.
.
.
</constraints>
.
.
.
<commands>
.
.
.
<command name="startService" description="Start the Hypersonic SQL database server instance" command-type="AntCommand">
<opts>
.
.
.
<opt parameter="javahome" description="Value of JAVA_HOME" required="false" property="opts.javahome" defaultproperty="entity.attribute.hsqldbRdbJavaHome" default="${env.JAVA_HOME}" type="string"/>
</opts>
</command>
.
.
.
</commands>
</type>
</types>

... starting from the startService command definition:
  • By convention, the goal is to make every configurable property used by a command's implementation available via an option definition.
  • Each option's property (in this case "property=opts.javahome") contains a value potentially provided from one of a number of sources in the following order of precedence:
  1. The value can be provided via the command line when the command is executed, otherwise ...
  2. The value will be initialized from a default property (i.e. defaultproperty="entity.attribute.hsqldbRdbJavaHome") if it is specified and exists, or ...
  3. The default value specified as an option attribute (default="${env.JAVA_HOME}") will be used.
  • If none of these sources are satisfied the option property is left unset, or, if the "required" attribute is set to "true", command execution fails.
This said, note how the default property is assigned using an "entity attribute" property value which arises from the Workbench project (i.e. the type/object model) as follows:
  • The properties intrinsic to each type can be assigned a general attribute name.
  • By convention, a setting type's "value" is assigned such a name (in this case the "HsqldbRdbJavaHome" setting type defines the "hsqldbRdbJavaHome" attribute naming the setting's value).
  • Adding the setting as an allowed singleton resource of the deployment type whose command implementation(s) wish to exploit it makes an attribute value available to objects of that type (see the constraints tab of the type page, and the properties tab of the object page for each module for the list of entity attributes).
Finally, note that entity attribute properties constructed in this fashion can receive values in the following ways:
  1. A resource relationship is setup and the entity attribute value is imported from the setting object.
  2. No such relationship is established, and a so-called "type-level default attribute" value defined directly in the type.xml is used instead (the "attribute-default" value of "${env.JAVA_HOME}" in this case).
So, in summary, the value of "opts.javahome" in the example will come from one of the following sources, in order of precedence (an illustrative invocation follows the list):
  1. A value provided on the command line when the command is executed.
  2. A HsqldbRdbJavaHome setting value (from a child resource).
  3. The type level attribute default value allowed by the HsqldbRdb type's constraints.
  4. In cases where no type level attribute default value exists, the default value specified on the command's option definition (redundant in this case).
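For instance, a command-line value always wins; the following hypothetical invocation (the project and object names are purely illustrative) would override both any HsqldbRdbJavaHome setting and the type-level default:

$ ad -p MyProject -t HsqldbRdb -o development -c startService -- -javahome /usr/java/jdk1.5.0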
Anthony Shortland.
anthony@controltier.com

Here comes CTL -- a new automation tool with deep roots

We're happy to announce the initial release of our latest project, CTL.

What is CTL?
CTL is a flexible distributed control dispatching framework that enables you to break management processes into reusable control modules and execute them in distributed fashion over the network.

What does CTL do?
CTL helps you leverage your current scripts and tools to easily automate any kind of distributed systems management or application provisioning task. It's good for simplifying large-scale scripting efforts or as another tool in your toolbox that helps you speed through your daily mix of ad-hoc administration tasks.

What are CTL's features?
CTL has many features, but the general highlights are:

* Execute sophisticated procedures in distributed environments - Aren't you tired of writing and then endlessly modifying scripts that loop over nodes and invoke remote actions? CTL dispatches actions to remote controllers with network transparency (over SSH), parallelism, and error handling already built in.

* Comes with pre-built utilities - CTL comes with pre-built utilities so you don't have to script actions like file distribution or process and port checking.

* Define your own automation using the tools/languages you already know - New controller modules are defined in XML and your scripting can be done in multiple scripting languages (Perl, Python, etc.), *nix shell, Windows batch, and/or Ant.

* Cross platform administration - CTL is Java-based, works on *nix and Windows.

What is CTL's relationship to other ControlTier projects?
AntDepo and CTL share the same code roots. CTL is a re-factoring and enhancement of the original AntDepo code base. While AntDepo is primarily used as a component within the larger ControlTier Application Service Provisioning System, CTL is designed to be fully usable as a standalone tool.

In the future, CTL will replace AntDepo within the ControlTier Application Service Provisioning System.

If you are new to ControlTier's automation tools, CTL is definitely where you want to start.

As always, please jump on the mailing list to tell us what you think, ask questions, or provide feedback. We always love hearing from our users (seriously!).

EDIT:
1. For more information about the motivation behind the CTL tools, check out this post on the dev2ops.org blog.

2. All information on how to use CTL can now be found on the ControlTier Wiki.

Wednesday, March 19, 2008

Converting Settings for use with ProjectBuilder

I recently posted the procedure for converting modules to ProjectBuilder. This handles any Deployment and Package sub-types you may have in your Workbench project, but there is still the question of what to do about project-specific Setting sub-types (Nodes are handled implicitly by depot-setup, of course).

As it turns out, there's no method of automatically converting Setting sub-types, just a set of rules and conventions we follow to manually add settings definitions to existing type.xml files:
  • By convention, setting type definitions are added to the type.xml of the module that most logically "owns" them (i.e. usually the module that uses them).
  • Also by convention, we group related sets of settings as sub-types of an abstract "container" type (though this is certainly not required), so the first type to add might be something like:
    <type role="abstract" uniqueInstances="true" name="JBossSetting" order="Setting">
    <description>A JBoss Setting</description>

    <supertype>

    <typereference name="Setting"/>

    </supertype>

    <constraints>

    <dependency-constraint enforced="false" kind="parent">

    <allowedtypes>

    <typereference name="JBossServer"/>

    </allowedtypes>

    </dependency-constraint>

    </constraints>

    </type>

  • Note that the only type attributes allowed for settings are "settingType", "settingValueEncrypted", "settingValue", and "referrers". This dictates what you can include in the attributes and constraints section of the type definition.
  • Next, add each setting type to the type.xml. e.g:
    <type role="concrete" uniqueInstances="true" name="JBossJavaHome" order="Setting">
    <description>JAVA_HOME for JBoss</description>
    <supertype>
    <typereference name="JBossSetting"/>
    </supertype>
    <attributes>
    <attribute name="javaHome" type-property="settingValue"/>
    </attributes>
    <constraints>
    <dependency-constraint enforced="false" kind="parent">
    <allowedtypes>
    <typereference name="JBossServer"/>
    </allowedtypes>
    </dependency-constraint>
    </constraints>
    </type>

One thing to bear in mind is that constraint references to types that are defined in separate modules (most often the case with Setting sub-types) are resolved only when all the types are loaded into the project. Using ProjectBuilder's "load-library" command (or loading a seed from the Workbench administration page) is the most obvious way to achieve this without the order in which modules are added to the project being significant.
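For example, after adding new setting type definitions you might rebuild and reload the entire library in one step with something along these lines (a sketch only; the exact options accepted by "load-library" may vary with your release):

$ ad -p MyProject -t ProjectBuilder -o mymodules -c load-library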

Anthony Shortland,
anthony@controltier.com

Friday, March 07, 2008

An example JBoss build and deployment project

[Edit: also see the demo section of the ControlTier Wiki for an up to date version of this]

(It was when I posted these notes to our Google group that I realized that there's real value in being able to post HTML. With that in mind, I'm re-posting here complete with links and formatting!)

These notes document using the ControlTier Elements Solution Library to build and deploy Sun's "Duke's Bank" sample application (from their J2EE 1.4 Tutorial) to JBoss 4.0.3SP1 following the "Getting Started with JBoss 4.0" guide. I've tested the project on Windows 2003 Server R2 and CentOS 4.5 (and it can also be easily adapted to run on other Unix/Linux platforms). You'll need around 4GB of disk space and 1GB of memory on a reasonably up-to-date system.

The basic idea was to implement the instructions from chapter 4 as a Workbench project building and deploying the sample application to a development environment on the (single) node (localhost) that's also running Workbench and Jobcenter.

1) Install
  • Install the latest version of the Java 1.5 SDK available from Sun into "$CTIER_ROOT/pkgs" (for use both by ControlTier - instead of Java 1.4 - and the Duke's Bank application).
  • Create a new ControlTier project called "DukesBank" (JBoss 4.0 J2EE 1.4 "Duke's Bank" sample application project) using Workbench.
  • Download the Elements Solution Library 1.0 seed from Sourceforge and "Import" it using the Workbench administration page (note that you can also check out the library's source and use ProjectBuilder to build and load the library).
2) Configure
  • Configure a ProjectBuilder object, then download from Sourceforge either the Linux or the Windows version of the objects as required and load it into Workbench by cutting and pasting the appropriate commands (note that on Unix/Linux it is assumed installation will occur to "~/dukesbank", while on Windows to "C:\dukesbank"):

    $ ad -p DukesBank -m Deployment -c Register -- -name elements -type ProjectBuilder -basedir $CTIER_ROOT/src/elements -installroot $CTIER_ROOT/target/elements -install
    .
    .
    .
    For more information about this object run: ad -p DukesBank -t ProjectBuilder -o elements -c Get-Properties -- -print
    [command.timer.Deployment.Register: 4.150 sec]

    $ ad -p DukesBank -t ProjectBuilder -o elements -c load-objects -- -format projectxml -filename $CTIER_ROOT/src/elements/demo/DukesBank/objects/linux.xml
    Loading "/home/demo/ctier/src/elements/demo/DukesBank/objects/linux.xml" ...
    1 file(s) have been successfully validated.
    Processing /home/demo/ctier/src/elements/demo/DukesBank/objects/linux.xml to /home/demo/ctier/antdepo/depots/DukesBank/deployments/ProjectBuilder/elements/var/null226801341.xml
    Loading stylesheet /home/demo/ctier/antdepo/depots/DukesBank/lib/ant/modules/ProjectBuilder/lib/load-objects/projectxml/project.xsl
    Mapping XML to properties ...
    Collecting object attributes ...
    Batching object attribute updates ...
    Batching resource and referrer updates ...
    Executing batch update ...
    [command.timer.load-objects: 20.873 sec]
    load-objects completed. execution time: 20.873 sec.

    C:\>ad -p DukesBank -m Deployment -c Register -- -name elements -type ProjectBuilder -basedir %CTIER_ROOT%\src\elements -installroot %CTIER_ROOT%\target\elements -install
    .
    .
    .
    [command.timer.Deployment.Register: 2.359 sec]

    C:\ctier\tmp>ad -p DukesBank -t ProjectBuilder -o elements -c load-objects -- -format projectxml -filename windows.xml
    Loading "C:\ctier\tmp/windows.xml" ...
    1 file(s) have been successfully validated.
    Processing C:\ctier\tmp\windows.xml to C:\ctier\antdepo\depots\DukesBank\deployments\ProjectBuilder\elements\var\null580203106.xml
    Loading stylesheet C:\ctier\antdepo\depots\DukesBank\lib\ant\modules\ProjectBuilder\lib\load-objects\projectxml\project.xsl
    Mapping XML to properties ...
    Collecting object attributes ...
    Batching object attribute updates ...
    Batching resource and referrer updates ...
    Executing batch update ...
    [command.timer.load-objects: 6.906 sec]
    load-objects completed. execution time: 6.906 sec.

  • Download the required third-party packages from the Internet and upload them to Workbench via their "PlatformZip" and "JBossZip" object pages: apache-ant-1.7.0-bin.zip, J2EE 1.4 Tutorial Update 7, jbossj2ee-src.zip (note that you must unpack the required Zip archive from the downloaded Zip), and jboss-4.0.3SP1.zip. (Note: Make sure that you set the DAV directory path to "/pkgs/DukesBank/zip/zips" to match the object XML when uploading the package files).
  • Install the objects:

    $ depot-setup -p DukesBank -a install
    "Install" command running for object: (AntBuilder) development
    "Install" command running for object: (ProjectBuilder) elements
    "Install" command running for object: (Updater) development
    "Install" command running for object: (Site) development
    "Install" command running for object: (JBossServer) development

    C:\>depot-setup -p DukesBank -a install
    "Install" command running for object: (AntBuilder) development
    "Install" command running for object: (ProjectBuilder) elements
    "Install" command running for object: (Updater) development
    "Install" command running for object: (Site) development
    "Install" command running for object: (JBossServer) development
  • Download the sample job definitions from Sourceforge and install them into Jobcenter using "Upload Job XML File ..." via the "Create a new Job ..." page.
3) Run
  • Deploy the packages that support building the application using Ant and deploy and start the empty JBoss server instance (you'll be able to pick up the JBoss server page at http://localhost:8180 since JBoss is configured to run on non-default ports to avoid colliding with Workbench):

    [jboss@development dukesbank]$ ad -p DukesBank -t Updater -o development -c Deploy -- -resourcetype 'AntBuilder|Site'
    Start: "Run the coordinated deployment cycle across the configured Sites." commands: dispatchCmd
    begin workflow command (1/1) -> "dispatchCmd -command Deploy -resourcename .* -resourcetype AntBuilder|Site" ...
    dispatching command: "Deploy " to: (AntBuilder) development, (Site) development ...
    .
    .
    .
    dispatched command: Deploy completed for: (AntBuilder) development, (Site) development
    end workflow command (1/1) -> "dispatchCmd -command Deploy -resourcename .* -resourcetype AntBuilder|Site"
    [command.timer: 29.828 sec]
    Completed: execution time: 29.828 sec

    C:\>ad -p DukesBank -t Updater -o development -c Deploy -- -resourcetype "AntBuilder|Site"
    Start: "Run the coordinated deployment cycle across the configured Sites." commands: dispatchCmd
    begin workflow command (1/1) -> "dispatchCmd -command Deploy -resourcename .* -resourcetype AntBuilder|Site" ...
    dispatching command: "Deploy " to: (Site) development, (AntBuilder) development ...
    .
    .
    .
    dispatched command: Deploy completed for: (Site) development, (AntBuilder) development
    end workflow command (1/1) -> "dispatchCmd -command Deploy -resourcename .* -resourcetype AntBuilder|Site"
    [command.timer: 41.531 sec]
    Completed: execution time: 41.531 sec

    (Note that this command can also be run from Jobcenter using the "development.Deploy" job and setting the "resourcetype" to "AntBuilder|Site" from the "Choose Options and Run Job ..." page).
  • Edit the client JNDI properties file in the J2EE Tutorial source to adjust the hard-coded naming provider port (this is necessary since JBoss is deployed using non-default ports to avoid colliding with Workbench):

    $ pwd
    .../dukesbank/j2eetutorial14/examples/bank/dd/client
    $ diff jndi.properties jndi.properties.orig
    3c3
    < url="jnp://localhost:1199"> java.naming.provider.url=jnp://localhost:1099
  • Run the application build and deployment cycle to build the Duke's Bank application from source, initialize the database schema and deploy to JBoss:

    $ ad -p DukesBank -t Updater -o development -c BuildAndDeploy -- -buildstamp 20080222.0
    Start: "Run the coordinated end-to-end the build and deployment processes across the configured Builders and Sites.." commands: Build,Change-Dependencies,Deploy ...
    begin workflow command (1/3) -> "Build -buildstamp 20080222.0" ...
    Start: "Run the coordinated build cycle across the configured Builders." Beginning build process with the following builders: (AntBuilder) development ...
    workflow command (1/1) -> "dispatchCmd -buildstamp 20080222.0 -command Build -resourcetype [^\.]*Builder -dispatchOptions buildstamp" ...
    .
    .
    .
    Completed: execution time: 31.322 sec
    end workflow command (3/3) -> "Deploy -buildstamp 20080222.0"
    [command.timer.DukesBank.Updater.development.BuildAndDeploy: 2:14.950 sec]
    Completed: End-to-end build and update process completed: Build,Change-Dependencies,Deploy. execution time: 2:14.950 sec

    C:\>ad -p DukesBank -t Updater -o development -c BuildAndDeploy -- -buildstamp 20080222.0
    Start: "Run the coordinated end-to-end the build and deployment processes across the configured Builders and Sites.." commands: Build,Change-Dependencies,Deploy ...
    begin workflow command (1/3) -> "Build -buildstamp 20080222.0" ...
    Start: "Run the coordinated build cycle across the configured Builders." Beginning build process with the following builders: (AntBuilder) development ...
    workflow command (1/1) -> "dispatchCmd -buildstamp 20080222.0 -command Build -resourcetype [^\.]*Builder -dispatchOptions buildstamp" ...
    .
    .
    .
    Completed: execution time: 11.313 sec
    end workflow command (3/3) -> "Deploy -buildstamp 20080222.0"
    [command.timer.DukesBank.Updater.development.BuildAndDeploy: 2:06.844 sec]
    Completed: End-to-end build and update process completed: Build,Change-Dependencies,Deploy. execution time: 2:06.844 sec

    (Note that this command can also be run from Jobcenter using the "development.BuildAndDeploy" job and setting the "buildstamp" from the "Choose Options and Run Job ..." page).
If you don't want to go through this setup process, I've prepared a CentOS 4.5 VMware virtual machine with the demonstration installed and ready to roll that I plan to post to Sourceforge imminently. I'll also post the DukesBank project archive file if you'd like to avoid the type/object model setup steps and cut right to the chase: finishing up configuration and running the demonstration.

Thanks,

Anthony.

Elements module library source re-structured

I've just restructured the Elements module library source on Moduleforge to establish its own separate trunk, branches and tags.

The trunk has been copied from the 1.0 branch, and in addition I've established a new 2.0 working branch as a copy of the trunk.

New development will proceed on the 2.0 branch aimed at integrating an out of the box end-to-end Java Server build and deployment solution based on:
  • CVS and Subversion source code control
  • Cruise Control based continuous integration
  • Ant and Maven based builds
  • Tomcat and JBoss application servers
  • HSQLDB and Derby Java based databases
In addition, we plan to formally integrate and document the Duke's Bank sample application as a demonstration application.

By the way, it seems I'm the only one who likes the name "elements", but there don't seem to be any constructive alternative "short" names out there that aren't acronyms already grabbed by the Java guys (JSSL, JSL, etc.)!

The "full" name of the library is "Java Server Module Library". Any ideas for a catchy moniker that could supplant "elements" in the source base, etc? Or, like me, are you satisfied with the "elemental" appropriateness of the existing name? Or, more likely, do give a toss at all?

Thanks,

Anthony.

Thursday, March 06, 2008

Converting Workbench modules for use with ProjectBuilder

ProjectBuilder is one of the key new features of ControlTier 3.1. This base type allows architects and administrators to manage their projects as a set of (XML) source files using development tools of their choice, as opposed to solely using Workbench.

The thing is, many people already have sets of modules created under Workbench and therefore need to convert them for use with ProjectBuilder. Since this process is not currently well documented on Open.ControlTier, here's a quick cheat sheet:
  • Set up a minimal ProjectBuilder object for your project. By convention, the ProjectBuilder is given a library name ("mymodules" in this case) that reflects the combined intent of the set of modules it manages.

    $ ad -p MyProject -t ProjectBuilder -o mymodules -c Register -- -basedir \${env.CTIER_ROOT}/src/mymodules -installroot \${env.CTIER_ROOT}/target/mymodules -install
    .
    .
    .
    For more information about this object run: ad -p MyProject -t ProjectBuilder -o mymodules -c Get-Properties -- -print


    [command.timer.Deployment.Register: 4.123 sec]
  • By the way, the base directory is usually put under source code control to provide version management for the module source. A good example of this is the structure of the Elements Module Library source on ModuleForge.
  • Create the minimal base directory structure:

    $ mkdir -p $CTIER_ROOT/src/mymodules/modules
  • Copy in the latest version of your module(s) from Workbench's WebDAV working directory (Note that these commands assume you are setting up a ProjectBuilder development environment on the same system where Workbench is deployed. This need not always be the case):

    $ cd $CTIER_ROOT/src/mymodules/modules
    $ cp -r $CATALINA_BASE/webapps/webdav/MyProject/modules/MyBuilder .


  • Convert the module's RDF files to type XML format:

    $ ad -p MyProject -t ProjectBuilder -o mymodules -c convert-rdf -- -type MyBuilder
    .
    .
    .
    convert-rdf completed. execution time: 0.888 sec.

  • Remove extraneous RDF and property files from the module source:

    $ rm MyBuilder/*.rdf MyBuilder/*.properties
  • Test building the module from source and uploading it to the project:

    $ ad -p MyProject -t ProjectBuilder -o mymodules -c build-type -- -type MyBuilder -upload
    Building type using the buildmodule.xml via classloader
    converting type.xml for module: MyBuilder
    packaging module: MyBuilder
    Building jar: /home/anthony/ctier/target/mymodules/modules/MyBuilder-1.jar
    calling repoImport ...
    processing files in directory: /home/anthony/ctier/target/mymodules/modules
    Uploading jar: /home/anthony/ctier/target/mymodules/modules/MyBuilder-1.jar to server: localhost ...
    [command.timer.repoImport: 3.058 sec]
    repoImport completed. execution time: 3.058 sec.
    [command.timer.build-type: 4.112 sec]
    build-type completed. execution time: 4.112 sec.

QED.

Anthony Shortland.