Use Synapse to work around AWS ELB static IP limitations


The system receives data from a third-party service over TCP sockets. The service requires a static IP address to send the data to. Several app nodes are created to receive and process the data, with AWS ELB as the load balancer in front of them. However, an ELB currently only exposes a host name, not a static IP address. AWS Elastic IP addresses are static, but they cannot be associated with an ELB.

Solution #1 - HAProxy (Not working)

The first solution I tried was to use an HAProxy server as a proxy in front of AWS ELB: install HAProxy on an EC2 instance, assign an Elastic IP address to it, and have HAProxy receive the data and forward it to the ELB.

The issue with this solution is that HAProxy only resolves DNS names at startup. Once HAProxy is running, if the IP address of the ELB changes, HAProxy has no way to detect it and keeps sending traffic to the old IP address.

Solution #2 - Synapse

Synapse is a service discovery system from Airbnb, built on top of HAProxy. Synapse provides various watchers that watch for changes. Once a change is detected, Synapse generates a new HAProxy configuration and reloads HAProxy. The application talks to the local HAProxy instead of the actual proxied service.

Back to the problem: I used Synapse to replace the ELB. Synapse has an ec2tag watcher that watches tags of EC2 instances. To add or remove instances from Synapse, just add or remove the corresponding tags. For example, suppose Synapse watches for the tag name/value env=test on EC2 instances. Once a new instance with the tag env=test is launched, Synapse detects the change and updates the HAProxy config file to include the new instance. The new instance is then able to receive data, and load balancing is provided by HAProxy.

Install Synapse

It's recommended to install Synapse directly from the GitHub master branch, as release 0.11.1 has some issues. For example, if you're using Bundler, add the following to your Gemfile:

gem 'synapse', :git => 'git://'  

If you're using Chef, use the gem_specific_install cookbook:

gem_specific_install "synapse" do  
  repository ""
  revision "master"
  action :install
end


Synapse is configured via a YAML file, synapse.conf.yaml. In this file, you define services and the HAProxy configuration.

services:
  myservice:
    default_servers:
      - name: "elb"
        host: "<elb-host>"
        port: 7000
    discovery:
      method: "ec2tag"
      tag_name: "env"
      tag_value: "test"
      aws_access_key_id: "<aws-key>"
      aws_secret_access_key: "<aws-secret>"
      aws_region: "<aws-region>"
    haproxy:
      port: 3200
      server_port_override: "7000"
      server_options: "check inter 2000 rise 3 fall 2"
      listen:
        - "mode tcp"
haproxy:
  bind_address: ""
  reload_command: "service haproxy reload"
  config_file_path: "/etc/haproxy/haproxy.cfg"
  do_writes: true
  do_reloads: true
  global:
    - "log local0"
    - "log local1 notice"
    - "user haproxy"
    - "group haproxy"
  defaults:
    - "log global"
    - "mode tcp"
    - "balance roundrobin"
    - "timeout client 50s"
    - "timeout connect 5s"
    - "timeout server 50s"

In the Synapse config file above, the services section defines the different services to watch. For myservice, the default_servers section contains the fallback servers used when no servers are discovered; here I used the ELB. The discovery section configures the discovery method. For ec2tag, you need to provide the AWS access key and secret, the region, and the tag name/value to watch. The haproxy section under the service contains the local HAProxy configuration for this service: in the example, HAProxy listens on port 3200 for myservice and forwards traffic to app nodes on port 7000. The top-level haproxy section contains global HAProxy configuration.


Copy the YAML file to some place, e.g. /etc/synapse.conf.yaml, then start Synapse using synapse -c /etc/synapse.conf.yaml.

Solution #3 - Nginx (untested)

Nginx seems to have better support for re-resolving DNS names at runtime, so it may work to use Nginx as the proxy.
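I haven't verified this, but a commonly cited Nginx technique is to configure a resolver and reference the upstream host through a variable, which makes Nginx re-resolve the name at request time instead of only at startup (the host name and resolver IP below are placeholders):

```nginx
resolver 8.8.8.8 valid=30s;  # re-resolve cached DNS answers every 30 seconds

server {
    listen 7000;

    location / {
        # proxy_pass with a variable defers DNS resolution to request time
        set $backend "my-elb-123456.us-east-1.elb.amazonaws.com";
        proxy_pass http://$backend:7000;
    }
}
```

Note that this proxies HTTP; forwarding raw TCP sockets this way would require Nginx's stream module instead.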

Spring RestTemplate Basic Authentication

I'm using Spring RestTemplate to consume a REST service with basic authentication, so I need a way to set the username and password. After running some searches, it turns out that it's not that easy to set them directly, so I manually created the Authorization header.

import java.nio.charset.Charset;

import org.apache.commons.codec.binary.Base64;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;

HttpHeaders headers = new HttpHeaders();  
headers.set("Authorization", "Basic " + new String(Base64.encodeBase64((username + ":" + password).getBytes(Charset.forName("US-ASCII")))));  
HttpEntity<byte[]> entity = new HttpEntity<byte[]>(headers);  
ResponseEntity<byte[]> response = restTemplate.exchange(url, HttpMethod.valueOf(httpMethod), entity, byte[].class);  
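On Java 8 or later, the same header value can be built without the commons-codec dependency, using the JDK's built-in Base64 encoder. A small self-contained sketch (the class name is mine):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Build the value of the Authorization header for HTTP basic auth
    public static String of(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.ISO_8859_1));
    }
}
```

Then `headers.set("Authorization", BasicAuthHeader.of(username, password))` replaces the commons-codec line.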

Maven failsafe plugin to fail builds

The Maven Failsafe plugin is used to run integration tests. If you only bind this plugin to the integration-test phase, it won't fail the build, so you can end up with successful builds that have failing integration tests. This design ensures the post-integration-test phase can still run and tear down the environment correctly: integration tests usually involve preparing the environment (DB, file system, network, etc.) before tests run, so cleanup afterwards is required. To check the result of the integration tests and fail the build accordingly, the verify phase needs to be used.

From the manual,

The Failsafe Plugin is used during the integration-test and verify phases of the build lifecycle to execute the integration tests of an application. The Failsafe Plugin will not fail the build during the integration-test phase, thus enabling the post-integration-test phase to execute.

So mvn verify should be used to invoke Maven when running integration tests.

A typical Failsafe plugin configuration binds both the integration-test and verify goals.
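For reference, a minimal configuration along these lines (the plugin version is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.17</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, Failsafe runs test classes matching its default naming patterns (e.g. *IT.java) during integration-test and fails the build, if needed, at verify.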


Atlassian Bamboo

If you are using Atlassian Bamboo to build, make sure **/target/failsafe-reports/*.xml is added to the test results directory, so that Bamboo can correctly display integration test results. The option Look in the standard test results directory may not work due to this bug, as **/target/failsafe-reports/*.xml is not part of the standard test results directory.

Bamboo Maven failsafe settings

KML circle generator

KML Circle Generator is a small app I wrote to generate KML circles for Google Earth. You cannot create circles directly in KML, so the idea is to use the KML <Polygon> element to approximate them. If you google "kml circle generator", you'll find a lot of existing apps. My app is more of an experiment for me to try out the Play framework with Scala, and I also tried to address some limitations in the existing apps.
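Framework aside, the core computation is just sampling points around the center and emitting them as the ring of a <Polygon>. A rough Java sketch under a simple spherical-Earth assumption (class and method names are mine, not from the app):

```java
import java.util.ArrayList;
import java.util.List;

public class KmlCircle {

    private static final double EARTH_RADIUS_M = 6371000.0;

    // Sample a circle of the given radius (meters) around a center point.
    // Returns lon/lat pairs in KML coordinate order; the first point is
    // repeated at the end to close the polygon ring.
    public static List<double[]> circlePoints(double centerLat, double centerLon,
                                              double radiusM, int segments) {
        List<double[]> points = new ArrayList<>();
        double angularRadius = radiusM / EARTH_RADIUS_M; // radians on the sphere
        for (int i = 0; i <= segments; i++) {
            double theta = 2 * Math.PI * i / segments;
            double lat = centerLat + Math.toDegrees(angularRadius * Math.cos(theta));
            double lon = centerLon + Math.toDegrees(angularRadius * Math.sin(theta))
                    / Math.cos(Math.toRadians(centerLat));
            points.add(new double[] { lon, lat });
        }
        return points;
    }

    // Render the points as the content of a KML <coordinates> element.
    public static String toKmlCoordinates(List<double[]> points) {
        StringBuilder sb = new StringBuilder();
        for (double[] p : points) {
            sb.append(p[0]).append(',').append(p[1]).append(",0 ");
        }
        return sb.toString().trim();
    }
}
```

The resulting string goes inside <Polygon><outerBoundaryIs><LinearRing><coordinates>…</coordinates></LinearRing></outerBoundaryIs></Polygon> in the generated KML.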


Easy selection of circle center

Some existing apps require the user to manually input the circle center's geo-location (latitude and longitude). This is not user-friendly, as the user has to use other tools to find the geo-location first. In my app, I embed Google Maps and allow the user to drag & drop a marker on the map to select the circle center.

Style customization

Customizing the circle's style is easy and intuitive. The user can customize the circle's fill color and line color/weight.

Built-in preview

After installing the Google Earth web plugin, the user can preview generated circles on the same page, make changes, and preview again easily.

Behind the scenes

The app is built using Play framework and AngularJS, written in Scala, CoffeeScript, LESS and HTML. Template and design are from HTML5 UP.

The app is hosted on Heroku on the free plan, so the performance may not be ideal.


Select circle center

Spring Testing transaction management

Spring Testing is a good tool for testing applications written with the Spring framework. It has convenient built-in transaction management for integration tests. By default, Spring starts a transaction for each test method and rolls back the transaction after the test method is executed. Methods annotated with @Before and @After are also executed in the same transaction. This way, no change is actually made to the database, so you don't need to clean the database manually after each test.

Although this automatic transaction management is considered harmful in some cases, it's very handy most of the time. I did, however, encounter some cases where I had to find workarounds.

In one test case, some database setup is required for all test methods, so a method annotated with @Before is created with the necessary setup code. In the actual test method, a background service is triggered to run some tasks, and then the result is verified. The background service runs in a separate thread and reads the data created in the @Before method from the database. But because the @Before method and the current test method run in the same transaction, the data changes are not written to the database until the test method finishes and the transaction is committed. So the background service cannot see the data and always fails.

Programmatic transaction management

To work around this, I changed @Before to @BeforeTransaction, which makes the method execute before the transaction starts, then used programmatic transaction management to commit the setup changes.

@Autowired
protected PlatformTransactionManager transactionManager;

@BeforeTransaction
public void setup() {  
  DefaultTransactionDefinition definition = new DefaultTransactionDefinition();
  TransactionStatus transaction = transactionManager.getTransaction(definition);

  //database setup

  transactionManager.commit(transaction);
}

As shown in the code above, data created in the setup method is committed to the database and visible to the following test methods.

JUnit execution order

Another solution is to leverage the test execution order introduced in JUnit 4.11. The idea is to make the @Before method a regular test method that is executed before the other test methods.

@TransactionConfiguration(transactionManager="transactionManager", defaultRollback=false)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class MyTest {

  @Test
  public void testMethod0() {
    //database setup
  }

  @Test
  public void testMethod1() {
    //actual testing code
  }
}
As shown in the code above, defaultRollback of @TransactionConfiguration is set to false, so database changes won't be rolled back. @FixMethodOrder(MethodSorters.NAME_ASCENDING) tells JUnit to execute methods in ascending order of their names; testMethod0 does the database setup and is executed before the actual test method testMethod1.

No automatic transaction management

If automatic transaction management introduces more trouble than it solves, you can simply disable it.

To disable automatic transaction management for a test class, use @TestExecutionListeners and exclude TransactionalTestExecutionListener.class from listeners. By default, TransactionalTestExecutionListener is included.

@TestExecutionListeners(listeners = {DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class})
public class MyTest {
}


For an individual test, use @Transactional(propagation = Propagation.NOT_SUPPORTED) to exclude a single test method from the transaction.


This actually shows a very common situation in daily development: 95% of the time, a good framework like Spring can help you a lot, but the other 5% of the time you'll need to find the answer yourself. In those cases, unfortunately, 95% of online resources cannot help you; you have to dig into the reference guide and the source code to find the answer.

Scala trait

A trait is a Scala language feature that is unfamiliar to Java developers. Traits are similar to classes: they can have fields and methods and maintain state. You can do anything in a trait definition that you can do in a class definition, with only two exceptions.

The first exception is that a trait cannot have parameters passed to its primary constructor. For example, the following code is invalid:

trait MyTrait(myVal: Int) { // invalid: a trait cannot take constructor parameters
}

To work around this, use an abstract val. For example, to parameterize MyTrait, use the code below.

trait MyTrait {  
  val myVal: Int
}

Then initialize an instance as below.

new MyTrait {  
  val myVal = 1
}

The second exception is the behavior of super calls. In traits, super calls are dynamically bound, depending on how the traits are mixed into concrete classes. This is what makes stackable modifications with traits possible.

Stackable modifications

Because super calls are dynamically bound in traits, traits can be used to implement stackable modifications. Stackable means that the result of the modifications depends on the order in which they are stacked. Scala uses a process called linearization to determine the actual targets of super calls.

In the code below, StringSource is an abstract class with only one method, getContent. BasicStringSource extends StringSource and wraps a string as the source.

abstract class StringSource {  
  def getContent(): String
}

class BasicStringSource(val content: String) extends StringSource {  
  def getContent() = content
}

Then we create three different traits to modify the content from StringSource: Uppercase turns the string into uppercase, Reverse reverses it, and Pad pads the string with *.

trait Uppercase extends StringSource {  
  abstract override def getContent(): String = {
    super.getContent().toUpperCase
  }
}

trait Reverse extends StringSource {  
  abstract override def getContent(): String = {
    super.getContent().reverse
  }
}

trait Pad extends StringSource {  
  abstract override def getContent(): String = {
    super.getContent().padTo(20, '*')
  }
}

abstract override is required for trait members that override a method which is still abstract in the superclass while calling super on it.

These three traits can be mixed into BasicStringSource in different orders to achieve different results.

object TraitSample extends App {  
  val source1 = new BasicStringSource("Hello World") with Uppercase with Reverse with Pad
  println(source1.getContent())

  val source2 = new BasicStringSource("Hello World") with Pad with Reverse with Uppercase
  println(source2.getContent())
}

The output of the code above is:

DLROW OLLEH*********  
*********DLROW OLLEH

Roughly speaking, traits are applied from right to left as they appear in the definition. For example, in new BasicStringSource("Hello World") with Uppercase with Reverse with Pad, Pad is applied first and calls getContent in Reverse, which in turn calls getContent in Uppercase. getContent in Uppercase returns HELLO WORLD, getContent in Reverse returns DLROW OLLEH, and getContent in Pad returns DLROW OLLEH*********.

The actual linearization order of new BasicStringSource("Hello World") with Uppercase with Reverse with Pad is the chain shown below:

Pad -> Reverse -> Uppercase -> BasicStringSource  

When super is called in the code, the implementation to the right in the linearization order is the one actually invoked.

  • Without a job offer from New Zealand, the EOI score will not be very high, which leads to an overly long wait after applying.
  • The chance of being approved by the immigration authority is not high. The most important factor for approval is whether the immigration officer believes you can find a job in New Zealand; if the documents you provide are sufficient to demonstrate this, the odds are much better. Work experience at a multinational company helps a lot. In most cases, the immigration officer grants a nine-month visa for you to come to New Zealand to look for a job, and once you find one you can immigrate.

MySQL - Table name case sensitivity

I was working on some data migration tasks. The remote database is MySQL running on Amazon RDS. To improve the speed, I first imported the data into my local MySQL instance on Windows, then restored the data to the remote database. But when the application started, it couldn't find any tables. This is because Hibernate was looking for table names with a different letter case.

In the Java code, entities are annotated like below:

@Entity
public class User {  
}

When the data was imported locally, the table names had become lower-case, like user, not the User that Hibernate was looking for.

Short-term solution

Update the MySQL system variable lower_case_table_names to 1. In fact, 1 is the default value on Windows, which is why all the table names became lower-case. As mentioned in the MySQL docs, this variable should always be set to 1 for InnoDB:

If you are using InnoDB or MySQL Cluster (NDB) tables, you should set this variable to 1 on all platforms to force names to be converted to lowercase.

For a self-hosted MySQL instance, use --lower-case-table-names=1 when starting MySQL. For RDS, add a new parameter group and set the variable lower_case_table_names to 1.

Long-term solution

Table names should all be lower-case for consistency across different platforms. Underscores can be used to separate words, e.g. user, user_role, customer_feedback.

This can be done with the @Table annotation, like @Table(name="user"), or with Hibernate's ImprovedNamingStrategy. Set the hibernate.ejb.naming_strategy configuration in Hibernate as below:

<property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.ImprovedNamingStrategy"/>  

c3p0 for Java 6

If you are using c3p0 for DB connection pooling on Java 6, be sure to use version c3p0-0.9.5-pre8, not the latest version c3p0-0.9.5-pre9. Starting from c3p0-0.9.5-pre9, the c3p0 interface com.mchange.v2.c3p0.PooledDataSource extends Java 7's java.lang.AutoCloseable, so c3p0-0.9.5-pre9 cannot run on Java 6. See the source code of 0.9.5-pre9 and 0.9.5-pre8 for the difference.

Atlassian Elastic Bamboo - Update Maven Settings

When using Atlassian Elastic Bamboo to build a Maven project, it's a common task to update Maven's settings.xml to add private repository information, e.g. credentials to access the company's private repository.

Below are two approaches I found to update the Maven settings.

Add settings.xml to code repository

The first approach is to add the settings.xml to your code repository, then specify the path to it using the -s option of the mvn command.

Suppose the settings.xml is in the root directory of your project, use mvn -s settings.xml clean deploy as the command line to invoke Maven.

Update settings.xml in Bamboo image

If an Amazon EBS volume is attached to the Bamboo agent, you can change the settings.xml file directly. Atlassian has a guide on how to do this; below is a much simpler version.

  1. Start the Bamboo agent
  2. Edit the file /mnt/bamboo-ebs/maven/.m2/settings.xml, not /home/bamboo/.m2/settings.xml. The latter is copied from /mnt/bamboo-ebs/maven/.m2/settings.xml after the agent starts.
  3. Find the EBS volume used by the running agent.
  4. Create a snapshot from the EBS volume.
  5. Update Bamboo elastic image configurations to attach the new snapshot.
  6. Done!