Build Pipeline Plugin for Jenkins

Today I am going to look into how Jenkins can render deploy jobs as a pipeline. The Build Pipeline Plugin renders upstream- and downstream-connected jobs that typically form a build pipeline. In addition, it offers the ability to define manual triggers for jobs that require intervention prior to execution, e.g. an approval process outside of Jenkins.

In my earlier post Run JJB to define jobs in Jenkins, I demonstrated how to deploy jobs automatically using Jenkins-Job-Builder. Now, suppose developers like the jobs you deployed via JJB and want to ‘copy’ them for their branch builds. By utilizing the Build Pipeline Plugin, a release engineer can streamline the jobs into a pipeline and publish the pipeline definition as a pipeline template.

Before we go further, let’s see how the Build Pipeline Plugin works.

1. Install the Build Pipeline Plugin.

2. Create a pipeline view; the plugin renders the upstream and downstream jobs. After that, review the file system under jobs/:

drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_DeployQA
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_DeployStage
drwxr-xr-x 5 oracle dba 4096 Jul 3 2014 master_clean
drwxr-xr-x 4 oracle dba 4096 Jul 3 2014 master_build
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_code-analysis
drwxr-xr-x 4 oracle dba 4096 Jul 3 2014 master_functional-test
drwxr-xr-x 4 oracle dba 4096 Jul 3 2014 master_unit-test
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_package
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_sonar
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_generate-doc

On the dashboard


From Jenkins’ config.xml (excerpt):

<au.com.centrumsystems.hudson.plugin.buildpipeline.BuildPipelineView plugin="build-pipeline-plugin@1.4.3">
  <owner class="hudson" reference="../../.."/>
  <properties class="hudson.model.View$PropertyList"/>
  <gridBuilder class="au.com.centrumsystems.hudson.plugin.buildpipeline.DownstreamProjectGridBuilder">
  ...

OK, let’s templatize the pipeline.

The steps to templatize it are:

1. Get the names of the branches from the remote Git repository:

git clone ${GIT_URL}

`git branch -r | tail -n +2 | sed 's/^\s*origin\///' | sed 's/\s*$//' | grep -v master`

2. For each branch name, make a copy of the master_* directories under jobs/.

3. For each newly copied job, update the job’s config.xml file.

4. Create the new pipeline view in Jenkins’ config.xml.
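Step 1’s branch-listing filter can be sanity-checked in isolation. A minimal sketch, run against simulated `git branch -r` output (the branch names here are made up; the first line is the `origin/HEAD` pointer, which `tail -n +2` skips):

```shell
# Simulated `git branch -r` output; only the filter pipeline is real.
branches='  origin/HEAD -> origin/master
  origin/develop
  origin/feature-x
  origin/master'

# Drop the HEAD pointer line, strip the leading "origin/" prefix and
# trailing whitespace, and exclude master itself.
printf '%s\n' "$branches" \
  | tail -n +2 \
  | sed 's/^\s*origin\///' \
  | sed 's/\s*$//' \
  | grep -v master
# prints:
# develop
# feature-x
```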

The script:

FULL_HOST_NAME=`hostname -f`

#shut down Jenkins peacefully
java -jar jenkins-cli.jar -s http://${FULL_HOST_NAME}:${HTTP_PORT}/ safe-shutdown

cd ${JENKINS_WEB}/jobs/
#git config --global http.proxy
#first clone the repo if it hasn't been cloned yet
git clone ${GIT_URL}
REPO_DIR=$(basename ${GIT_URL} .git)

#foreach remote branch except master
for b in `git -C ${REPO_DIR} branch -r | tail -n +2 | sed 's/^\s*origin\///' | sed 's/\s*$//' | grep -v master`; do
  #foreach jobs/master_* directory
  for d in `ls -d master_*`; do
    z=`echo $d | sed "s/master/$b/"`
    cp -r $d $z
    #reset nextBuildNumber
    echo 1 > $z/nextBuildNumber
    #remove historic build logs copied from the 'master' pipeline - e.g. jobs/xxx/builds/2014-07-03_02-04-05
    rm -fr $z/builds/*-*-*_*
    #reset lastFailedBuild, lastStableBuild, lastSuccessfulBuild, lastUnstableBuild, lastUnsuccessfulBuild
    for f in `find $z/builds/ -type f`; do
      echo -1 > $f
    done
    #re-configure the config.xml file for the new job
    sed -i "s/master/$b/g" $z/config.xml
  done

  #backup Jenkins' top-level config.xml before editing it
  cp ${JENKINS_WEB}/config.xml ${JENKINS_WEB}/"config.xml_$(date +%F_%R)"
  #get the first line number of the 'master' pipeline view (the BuildPipelineView element)
  fln=`nl ${JENKINS_WEB}/config.xml | grep "<au.com.centrumsystems.hudson.plugin.buildpipeline.BuildPipelineView" | awk '{print $1}' | head -1`
  #get the last line number of the 'master' pipeline view
  lln=`nl ${JENKINS_WEB}/config.xml | grep "</au.com.centrumsystems.hudson.plugin.buildpipeline.BuildPipelineView" | awk '{print $1}' | head -1`
  let n=$lln+1
  #keep everything up to and including the 'master' view
  sed -n "1,${lln}p" ${JENKINS_WEB}/config.xml > ${JENKINS_WEB}/config.tmp.xml
  #extract the 'master' view configuration and append a copy for the branch
  sed -n "${fln},${lln}p" ${JENKINS_WEB}/config.xml | sed "s/master/$b/g" >> ${JENKINS_WEB}/config.tmp.xml
  #keep everything after the 'master' view
  tail -n +$n ${JENKINS_WEB}/config.xml >> ${JENKINS_WEB}/config.tmp.xml
  mv ${JENKINS_WEB}/config.tmp.xml ${JENKINS_WEB}/config.xml
done

#restart the Jenkins server to load the new pipelines
java -jar jenkins.war

Run it.

In the file system:

drwxr-xr-x 4 oracle dba 4096 Jul 3 00:41 develop_unit-test
drwxr-xr-x 3 oracle dba 4096 Jul 3 00:41 develop_package
drwxr-xr-x 3 oracle dba 4096 Jul 3 00:41 develop_generate-doc
drwxr-xr-x 4 oracle dba 4096 Jul 3 00:41 develop_functional-test
drwxr-xr-x 3 oracle dba 4096 Jul 3 00:41 develop_DeployStage
drwxr-xr-x 3 oracle dba 4096 Jul 3 00:41 develop_DeployQA
drwxr-xr-x 3 oracle dba 4096 Jul 3 00:41 develop_code-analysis
drwxr-xr-x 4 oracle dba 4096 Jul 3 00:41 develop_build
drwxr-xr-x 3 oracle dba 4096 Jul 3 00:43 develop_sonar
drwxr-xr-x 5 oracle dba 4096 Jul 3 00:43 develop_clean
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_DeployQA
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_DeployStage
drwxr-xr-x 5 oracle dba 4096 Jul 3 2014 master_clean
drwxr-xr-x 4 oracle dba 4096 Jul 3 2014 master_build
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_code-analysis
drwxr-xr-x 4 oracle dba 4096 Jul 3 2014 master_functional-test
drwxr-xr-x 4 oracle dba 4096 Jul 3 2014 master_unit-test
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_package
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_sonar
drwxr-xr-x 3 oracle dba 4096 Jul 3 2014 master_generate-doc

Review the change in Jenkins’ config.xml:


You can see the script just copies the pipeline view configuration from ‘master’, replacing the name with the name of the branch.
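The splice the script performs on config.xml can be illustrated on a toy file. A minimal sketch, where the file contents and the line numbers (fln=2, lln=2) are made up for illustration:

```shell
# Stand-in for ${JENKINS_WEB}/config.xml with a 'master' view on line 2.
printf '<hudson>\n  <view>master</view>\n</hudson>\n' > /tmp/config.xml

fln=2; lln=2; n=$((lln + 1))
# keep lines 1..lln (everything up to the end of the 'master' view)
sed -n "1,${lln}p" /tmp/config.xml > /tmp/config.tmp.xml
# append a copy of the view with 'master' replaced by the branch name
sed -n "${fln},${lln}p" /tmp/config.xml | sed 's/master/develop/g' >> /tmp/config.tmp.xml
# append the rest of the file
tail -n +$n /tmp/config.xml >> /tmp/config.tmp.xml
cat /tmp/config.tmp.xml
# prints:
# <hudson>
#   <view>master</view>
#   <view>develop</view>
# </hudson>
```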

Go to the dashboard. Here I encountered a weird issue: the ‘develop’ pipeline was empty initially! I have not yet figured out why, but it showed up after I built its root job ‘develop_clean’ for the first time. 🙂

Here is the snapshot,



Nested View Plugin for Jenkins

“This plugin adds a new view type that can be selected when adding job views. This view does not show any jobs, but rather contains another set of views. By default, clicking on the view tab for a nested view shows a list of the subviews it contains (with folder icons). You can also configure a default subview to bypass this intermediate page and jump directly to that default view. Now the view tabs across the top show the views in this nested view, and the job list is for this default subview. This streamlines the navigation between views, but makes it harder to find the Edit View link for the nested view itself. Once a default subview has been assigned, navigate to the edit page by first clicking the plus (“+”) icon in the view tabs (for adding a new subview) and then find the Edit View link in the sidepanel.”

The Nested View Plugin is very useful for maintaining a friendly view structure.


Run JJB to define jobs in Jenkins

In the post Setup Jenkins-job-builder on Windows, I installed Jenkins-job-builder (JJB) on my Windows 7 laptop. Now it is time to verify that it is working.

1. Add C:\Python27\Scripts to the %PATH% environment variable so that you can issue the ‘jenkins-jobs’ command.

2. Update jenkins-job-builder-0.3.0\etc\jenkins_jobs.ini as below. (You don’t need to bother with the user or password if your Jenkins is not secured.)
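For reference, a minimal jenkins_jobs.ini looks like this (the URL and credentials below are placeholders, not my actual values):

```ini
[jenkins]
user=jenkins
password=XXXXXXXX
url=http://localhost:8080/
```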


3. Create a configuration file:

- job:
    name: HelloWorld-YAML-9
    description: 'This job is created by YAML vi Jenkins-job-builder-0.3.0 automatically. Do NOT edit it manually.'
    project-type: freestyle
    block-downstream: false
    scm:
      - git:
          skip-tag: false
    triggers:
      - pollscm: '@hourly'
    builders:
      - ant:
          targets: "debug test install"
          buildfile: "build.xml"
    publishers:
      - junit:
          results: helloworld_junit-result.xml
      - email:

4. Test it.

D:\jenkins-job-builder-0.3.0\etc>jenkins-jobs test HelloWorld-job.yaml -o .

It will create one file named ‘HelloWorld-YAML-9’. It is exactly the same as how Jenkins defines the job (excerpt):

<?xml version="1.0" ?>
 <description>This job is created by YAML vi Jenkins-job-builder-0.3.0 automatically. Do NOT edit it manually.</description>
 <scm class="hudson.plugins.git.GitSCM">
 <buildChooser class="hudson.plugins.git.util.DefaultBuildChooser"/>
 <submoduleCfg class="list"/>
 <triggers class="vector">
 <targets>debug test install</targets>

5. Install it.

D:\jenkins-job-builder-0.3.0\etc>jenkins-jobs --conf jenkins_jobs.ini update HelloWorld-job.yaml
INFO:root:Updating jobs in HelloWorld-job.yaml (None)
INFO:jenkins_jobs.builder:Creating jenkins job HelloWorld-YAML-9

The Jenkins console gets updated as below,

However, the Jenkins console shows a warning message:

It is acceptable to leave unreadable data in these files, as Jenkins will safely ignore it. To avoid the log messages at Jenkins startup you can permanently delete the unreadable data by resaving these files using the button below.

Type | Name | Error
hudson.model.FreeStyleProject | HelloWorld-YAML | CannotResolveClassException: hudson.plugins.git.GitSCM

This is because Git is not installed for Jenkins by default. Download and install Git for Windows and configure it.



6. Install it again.

D:\jenkins-job-builder-0.3.0\etc>jenkins-jobs --conf jenkins_jobs.ini update HelloWorld-job.yaml
INFO:root:Updating jobs in HelloWorld-job.yaml (None)
INFO:jenkins_jobs.builder:Creating jenkins job HelloWorld-YAML-9



It matches what I configured in the YAML file.

Project Name: HelloWorld-YAML-9 | Description: This job is created by YAML vi Jenkins-job-builder-0.3.0 automatically. Do NOT edit it manually.

It uses Git | Poll SCM: @hourly

Build: Invoke Ant debug test install | JUnit report: helloworld_junit-result.xml

7. The output in Jenkins & the file system

Console Output:

Started by user anonymous
Building in workspace C:\Users\luhuang\.jenkins\jobs\HelloWorld-YAML-9\workspace
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository
 > C:\Program Files (x86)\Git\bin\git.exe init C:\Users\luhuang\.jenkins\jobs\HelloWorld-YAML-9\workspace
Fetching upstream changes from
 > C:\Program Files (x86)\Git\bin\git.exe --version
Setting http proxy:
 > C:\Program Files (x86)\Git\bin\git.exe fetch --tags --progress +refs/heads/*:refs/remotes/origin/*
 > C:\Program Files (x86)\Git\bin\git.exe config remote.origin.url
 > C:\Program Files (x86)\Git\bin\git.exe config remote.origin.fetch +refs/heads/*:refs/remotes/origin/*
 > C:\Program Files (x86)\Git\bin\git.exe config remote.origin.url
Fetching upstream changes from
Setting http proxy:
 > C:\Program Files (x86)\Git\bin\git.exe fetch --tags --progress +refs/heads/*:refs/remotes/origin/*
Seen branch in repository origin/develop
Seen branch in repository origin/master
Seen 2 remote branches
Checking out Revision 1b93141b4a81dd0711fc19479834bb4384dd1937 (origin/develop, origin/master)
 > C:\Program Files (x86)\Git\bin\git.exe config core.sparsecheckout
 > C:\Program Files (x86)\Git\bin\git.exe checkout -f 1b93141b4a81dd0711fc19479834bb4384dd1937
First time build. Skipping changelog.
 > C:\Program Files (x86)\Git\bin\git.exe tag -a -f -m Jenkins Build #1 jenkins-HelloWorld-YAML-9-1
FATAL: Unable to find build script at C:\Users\luhuang\.jenkins\jobs\HelloWorld-YAML-9\workspace\build.xml
Build step 'Invoke Ant' marked build as failure
Recording test results
Sending e-mails to:
ERROR: Could not connect to SMTP host: localhost, port: 25
javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 25;
  nested exception is: Connection refused: connect
	at com.sun.mail.smtp.SMTPTransport.openServer(
	at com.sun.mail.smtp.SMTPTransport.protocolConnect(

The code has been pulled down locally from GitHub:

Volume in drive C is System
Volume Serial Number is 4499-5BD0

Directory of C:\Users\luhuang\.jenkins\jobs\HelloWorld-YAML-9\workspace

06/30/2014 01:11 AM <DIR> .
06/30/2014 01:11 AM <DIR> ..
06/30/2014 01:11 AM 128 .gitignore
06/30/2014 01:11 AM 6,739 pom.xml
06/30/2014 01:11 AM 24
06/30/2014 01:11 AM 91
06/30/2014 01:11 AM <DIR> src
4 File(s) 6,982 bytes
3 Dir(s) 52,732,325,888 bytes free

C:\Users\luhuang\.jenkins\jobs\HelloWorld-YAML-9\workspace>dir src\main
Volume in drive C is System
Volume Serial Number is 4499-5BD0

Directory of C:\Users\luhuang\.jenkins\jobs\HelloWorld-YAML-9\workspace\src\main

06/30/2014 01:11 AM <DIR> .
06/30/2014 01:11 AM <DIR> ..
06/30/2014 01:11 AM <DIR> bin
06/30/2014 01:11 AM <DIR> conf
06/30/2014 01:11 AM <DIR> java
06/30/2014 01:11 AM <DIR> scripts
0 File(s) 0 bytes
6 Dir(s) 52,732,325,888 bytes free


However, it fails to run the ‘ant’ target. This is expected, as this is a Maven project. I tried to update the builders tag with the maven-target content below; however, it doesn’t work in my case. I am still checking why.

        - maven-target:
            maven-version: Maven3
            pom: parent/pom.xml
            goals: clean
            private-repository: true
            properties:
              - foo=bar
              - bar=foo
            java-opts:
              - "-Xms512m -Xmx1024m"
              - "-XX:PermSize=128m -XX:MaxPermSize=256m"
            settings: mvn/settings.xml
            global-settings: mvn/globalsettings.xml

Continuous Database Integration

Database integration is one of the toughest parts of Continuous Integration. In this post I will review some key points of Database Integration (DI).

Firstly, all of the files related to DI should be source-controlled. Ensure that your database scripts have been tested and verified.

Secondly, let’s list all of the reproducible database steps.

1. Delete the database.

Delete the database and its data so that later you can create a new database with the same name.

2. Create the database.

Use DDL files to create the new database.

3. Import data.

Use insert/import/etc. scripts to import data.

4. Migrate database and its data.

Migrate database schema and its data to a new environment.

5. Modify database objects.

Use DDL files to modify database objects.

6. Update testing data.

7. Backup/restore data.

My experience is:

For every build, I need to run steps 1, 2, and 3 in an automated flow (this takes 2-3 days). Then, in step 4, I use Oracle VM to templatize my base environment as a template, so developers and testers can just clone their environments from the template and have the same data as the base environment in minutes.

For steps 5 and 6 we use Oracle’s edition-based redefinition technology. Every time we need to apply DDL and DML, we apply them in the PATCH edition, and only after that succeeds do we cut over to the RUN edition.

For step 7, we simply use the Oracle imp/exp tools.
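As a sketch, steps 1-3 above could be wired into a shell wrapper. Everything here (the connect string, the SQL script names, the dump file) is a hypothetical placeholder rather than our actual flow, and `RUN` defaults to `echo` so the sketch only prints the commands it would run:

```shell
#!/bin/sh
# Hypothetical drop/create/import flow for steps 1-3.
# RUN defaults to 'echo' (dry run); set RUN='' to really execute.
RUN=${RUN-echo}

ORACLE_CONN="system/password@//dbhost:1521/ORCL"   # placeholder

# 1. Delete the database objects so they can be recreated under the same name.
$RUN sqlplus -s "$ORACLE_CONN" @drop_db.sql
# 2. Create the database objects from version-controlled DDL files.
$RUN sqlplus -s "$ORACLE_CONN" @create_db.sql
# 3. Import the baseline data.
$RUN imp "$ORACLE_CONN" file=baseline.dmp full=y
```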

During database development, the DBA should not be involved in data migration, because that part should be strictly handled by automated flows; the DBA should focus on database performance and other priorities. By providing developers with a template, they can have a cloned environment in minutes. We call those environments Sandboxes, and with them developers can work independently.


For a practitioner of Release Engineering, Continuous Delivery is not merely utilizing tools and automating some activities. It requires everyone involved in the delivery process to coordinate in order to achieve the delivery goal. In this article I will introduce two Continuous Delivery maturity models and explore how we can apply them in our continuous delivery activities.

The first Continuous-Delivery-Maturity-Model I want to introduce here is from

The model defines five maturity levels (base, beginner, intermediate, advanced, and expert) in five categories: Information & Reporting, Test & Verification, Build & Deploy, Design & Architecture, and Culture & Organization. The advantage of this model is that it considers not only the DevOps view but also Culture & Organization. Sometimes the organization and its culture are the most important aspects to consider when aiming to create a sustainable Continuous Delivery environment. I once worked for a company with a dedicated Build & Release Team; due to the company culture, the team was dismissed after some years. So understanding how mature your company is can give you an understanding of how far Continuous Delivery, and even the Release Engineering team, can go.

However, from the view of Release Engineers, the maturity of Culture & Organization is somewhat out of our control; a Release Engineer sometimes cannot do much to change the culture and organization. This model also covers the maturity of Design & Architecture. We all know that as Release Engineers we need to be involved in architecture design, and that the design and architecture of products have an essential impact on our ability to adopt continuous delivery. But what I want to share in this post is how to manage your continuous delivery given that you have to live with your current architecture and organization culture. So I will introduce another Continuous Delivery maturity model below. It is the model described in the book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation and in Continuous Delivery: A Maturity Assessment Model.

The model has five levels in the categories ‘Build Management & CI’, ‘Environment & Deploy’, ‘Release Management (Compliance)’, ‘Testing’, ‘Data Management’, and ‘Configuration Management’. Let me summarize them below:

Level-5: Optimizing,

Build Management & CI: Every team has a fixed time to update each other and discuss integration problems, and problems can be fixed in an automated, fast-feedback, and visual way.

Environment & Deploy: All environments are managed effectively; environment provisioning is fully automated and, where possible, uses virtualization technology.

Release Management (Compliance): DevOps and delivery teams have a fixed time to work together to manage delivery risk and shorten the SDLC.

Testing: Production deployments work most of the time, with no rollbacks. All bugs are found and fixed in a short time.

Data Management: The data for database performance testing and deployment can be reused, and provides feedback between release cycles.

Configuration Management: There are regular reviews of whether configuration management supports effective cooperation among teams, fast development, and auditable change management.

Level-4: Quantitatively Managed,

Build Management & CI: The build must not stay in a failed status for long, and build metrics are collected.

Environment & Deploy: Deployments are tracked carefully, and the release and rollback processes are tested.

Release Management (Compliance): The health of environments and applications is monitored proactively and cyclically.

Testing: Quality is tracked with metrics, and all non-functional requirements are defined and measured.

Data Management: The database is monitored and optimized, and database upgrade and rollback testing is performed for every deployment.

Configuration Management: Developers commit to the mainline at least once a day and create a branch only when it is time for a release.

Level-3: Consistent,

Build Management & CI: Check-in builds and check-in testing. Dependencies are well managed. Tools and scripts are reusable.

Environment & Deploy: Software delivery is automated, using a unified mechanism to deploy to all environments.

Release Management (Compliance): Change management is defined and conducted. All policies and compliance requirements are satisfied.

Testing: Automated unit testing and acceptance testing. Testing is part of development.

Data Management: Database changes are automated and are part of the deployment pipeline.

Configuration Management: Libraries and dependencies are well managed.

Level-2: Reproducible,

Build Management & CI: Software is built and tested automatically and regularly. All builds are based on the latest code from the repository.

Environment & Deploy: All environment configuration is source-controlled and kept separate from the source code. Creating a new environment does not cost much time or effort.

Release Management (Compliance): Releasing software is a bit painful, but releases are of good quality. Part of the configuration, from requirement inception to delivery, can be traced.

Testing: Automated testing is part of the development of user stories.

Data Management: Database changes are automated and source-controlled.

Configuration Management: Source code, configuration, build & deploy scripts, and data migrations are source-controlled.

Level-1: Regressive,

Build Management & CI: Software builds are manual, and artifacts and reports are not managed well.

Environment & Deploy: Software deployment is manual, and the software packaging is environment-dependent. Environment preparation is manual.

Release Management (Compliance): Cannot release with good quality frequently.

Testing: Manual testing only, after development finishes.

Data Management: Database migration is manual and not version-controlled.

Configuration Management: No source control, and commits are infrequent.

This model is very good and highly practicable. As release engineers, we are involved in all of the categories ‘Build Management & CI’, ‘Environment & Deploy’, ‘Release Management (Compliance)’, ‘Testing’, ‘Data Management’, and ‘Configuration Management’ in our daily jobs. I think this model can definitely help enhance our Continuous Delivery.

I believe that we can use this model as our guide, and it will help us to do continuous improvement.

Problems in Software Delivery

As a software engineer, the most important thing we need to resolve is: how do we deliver the product to its customers?

In this page, I will share my thoughts on this.

Let me introduce a very typical continuous integration pipeline here. You can refer to my earlier post Continuous Integration for more details.

Build => Unit Testing => Code Analysis => Staging => Deployment => Delivery

For many companies, Release Day is the busiest day. Why? Because for most projects, the risk is highest at delivery!

Let’s consider a scenario. On Release Day, the Operations Team needs to prepare the OS environment and then install third-party libraries and software. After that, the Operations Team transfers the to-be-deployed applications to the server and follows the deployment guide provided by the development team to configure the system. If the system is being deployed to a distributed environment, the Operations Team has to do the above steps node by node. OK, now everything is almost done. The Operations Team tries to start up the applications, and they fail! I think many engineers have had a similar experience.

Below is the summary of problems in software delivery:

1. You deploy your software manually, even to your testing, staging, and demo environments.

The characteristics of this practice are:

# You have a very detailed document, and you have to follow it strictly to deploy your application.

# You need to do manual testing to verify whether your deployment works.

# You need to work with the development team very frequently to fix deployment issues.

# If your deployment is distributed, you have to deploy node by node.

# Deployment takes a long time.

# The result of deployment is unknown; you cannot know in advance whether your deployment will work.

# Your team doesn’t deploy to or test on a staging environment frequently. (Here I define ‘Staging Environment’ as an environment with the same configuration as Production.)

# You need to configure your production environment manually. In other words, you cannot reproduce a duplicate production environment for other purposes in minutes.

In a word, you cannot deploy your applications at the press of a button.

OK, we all know that we can use automated Continuous Integration/Delivery to fix the above problems in software delivery. Fast delivery is very important nowadays; we need a fast, effective, and reliable way to deliver high-quality, high-value products.

To achieve this goal, we need to deliver the products frequently and automatically:

1. Automate. If we cannot automate our build, deployment, testing, and delivery, then the process is not reproducible. The result of every deployment is unpredictable, because the software configuration, system configuration, and procedures cannot be reproduced. That means we cannot provide good-quality products through our release engineering.

2. Do it frequently. If we deploy very frequently, the difference between one deployment and the last will be minor; this dramatically reduces risk and also makes it easy to roll back the deployment if any issue happens.

Continuous Integration – Session

“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.” – Martin Fowler.

Some companies which don’t have a Release Team don’t adopt CI, even though they claim that they follow Agile development. In my opinion, it is fake Agile if there is no CI in the SDLC.

Nowadays it is very common that you have to be ready to deploy your artifact at any time to satisfy the business requirements, and the era when you could still produce a good-quality product in a timely manner without a professional release engineer is gone! More and more organizations have their own Build/Release Team, in other terms, DevOps.

I once held a sharing session with new hires. The session was about what CI is, how we conduct CI, and the function of the Build & Release Team. I hoped that through that session people, especially the new hires, could understand our SDLC well.

I share my slides below, with necessary modifications and some pages removed to satisfy company policy.

Below is a brief outline of my session.


Maintain a Single Source Repository
Check in & Automate the Build
Everyone can see what’s happening
Make it Easy for Anyone to Get the Latest Executable
Make the Build Self-Testing
Automate Deployment

The value of CI:

Reduce risks
Reduce repetitive manual processes
Generate deployable software at any time and at any place
Enable better project visibility
Establish greater confidence in the software product from the development team

CI is not just a tool for finding compile errors, although it can find them.

Compilation is the most basic thing CI does; compile errors are not acceptable.
The target of CI is to help find integration/deployment problems as early as possible.
Ideally, a successful build in CI should have:
 1. Compiled successfully
 2. Passed all unit tests
 3. Unit test coverage reaching the acceptable rate
 4. Passed all functional and regression tests
 5. Passed performance tests
 6. Passed user acceptance tests if necessary
Any successful CI build should generate a deliverable package, so CI can and should give team members confidence that our product can be deployed to production at any time.

CI is one of the core practices of Agile. Effective CI needs the whole team to follow the other practices; on the other hand, CI works with the other practices to make the whole project better.

Test Driven Development
Automation Testing
Coding standard adherence
Small releases
Collective ownership


Commit code frequently
Don’t commit broken code
Fix broken builds immediately
Write automated developer tests
All tests and inspections must pass
Run private builds
Avoid getting broken code


Source code management –>
Source control system (like CVS, SCCS, Subversion) setup and maintenance
Setup and monitor daily continuous/production builds
Co-ordinate with the development team in case of build failures
Update build tools as per changes in the product output/strategies
Create branches and set up a separate build system after milestone releases
Create build storage archives for back-tracking builds

Cross team co-ordination –>
Gather build specifications from the technical team
Document build instructions for the development team
Participate in release/milestone planning and scheduling with the product team
Announce and communicate vetted builds to QA and the documentation team
Work with the localization team to produce multi-language bits
Work with the sustaining team during product patches/upgrades
Coordinate with other dependent groups for their product milestone bits, e.g. application server, JDK, etc.

Build output –>
Setup kit server for build storage
Setup a cache server for storing other product bits or third party bits
Upload bits for internal and external customers
Create CD/DVD iso images and physical media

Code Quality Control –>
Set up coding standards
Monitor code quality trends

Software Engineering –>
Agile & CI governed
Automate as much as possible
Workflow optimized

You can download the slides from Continuous Integration-Session