Wednesday, November 22, 2017

Pick the correct name

Picking a correct name for a software component is one of the hardest things every developer has to deal with. Here, "component" refers to anything in your source code: a variable, a function, a class, and so on. A well-chosen name is a huge achievement because it improves the readability of your code, so let's find out some of the qualities of a good name.



This is a swan; in Sinhala this animal is called "thisara", meaning it can live in water, in the air, or on land, or in all three spaces. What a perfect way to define the behavior of an object by its name.





Let's list some of the core qualities a name should have.

  • Readability.
  • Pronounceability.
  • Representing a single context.
  • Avoiding acronyms.

The following rule comes from a tweet by Robert C. Martin (AKA Uncle Bob): the length of a variable name should be proportional to its scope, while the length of a function or class name should be inversely proportional to its scope.
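As a sketch of that last rule, here is a small Java example (all names are invented for illustration): the wide, class-level scope gets a long descriptive name, the tiny loop-level scopes get short ones, and the public method keeps a short, familiar name.

```java
// Invented example illustrating the scope rule above.
public class NamingScope {

    // Class-level scope is wide, so the variable name is long and descriptive.
    static final int MAX_RETRY_ATTEMPTS = 3;

    // A public method is called from far away, so its name stays short and familiar.
    static int sum(int[] values) {
        int s = 0;                 // one-line scope: a single letter is enough
        for (int v : values) {     // loop scope: a short name is idiomatic
            s += v;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3})); // prints 6
    }
}
```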

Tuesday, October 10, 2017

How I performed a Redis failover.

Recently we encountered a failure on one of our Redis nodes: it had reached the maximum number of connected clients. As an immediate action to resolve the incident, we wanted to modify the node's connection timeout property and fail over the master role to a slave, since the master node was no longer responsive.


The approach we followed the first time.

We modified the redis.conf file on both the master and the slave nodes, then restarted the Redis service on the master. Because of the restart, the slave promoted itself to master.

The disadvantage of this approach is that when the master restarts, it loses any ongoing operations. This is not the approach recommended by Redis.


The approach recommended by Redis.

Redis has a built-in command to fail over a master node to a slave node:
CLUSTER FAILOVER [FORCE|TAKEOVER]

We used the TAKEOVER option, since both servers were running as expected and we just wanted to switch the master. Once we executed this command on the slave node, the master stopped accepting new Redis connections and waited until all existing connections completed their processing. Once completed, the master became the slave and handed over the master responsibilities to the other node. By following this approach, we did not lose any transactions, unlike with the previous approach.

Modifying config values at runtime.

We used CONFIG SET, for example:

CONFIG SET timeout 70

For listing all config values, you can use:

CONFIG GET *

or for a single value:

CONFIG GET timeout

The CONFIG SET command takes effect immediately on the instance, but keep in mind to also change the redis.conf file so the change is applied in case of a service restart; otherwise you will lose any changes made at runtime. (On Redis 2.8 and later, the CONFIG REWRITE command can write the current in-memory configuration back to redis.conf for you.)
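For example, to make the timeout change above survive a restart, the same value should appear in redis.conf (a minimal fragment; the property name matches the CONFIG SET key):

```
# redis.conf
# Close idle client connections after 70 seconds (0 disables the timeout).
timeout 70
```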

Sunday, July 16, 2017

Are you limiting your rate with Thread.sleep?

Let me start with two scenarios you may have encountered before.

Scenario 1: Your application connects to a queuing service and suddenly loses network connectivity.

Scenario 2: Your application invokes an external API, which starts to reject your calls due to the high rate of access.

If we look at these two, you may think it's really simple to solve them by putting the thread to sleep. For the first scenario, you can define a maximum number of retries and sleep between retries. For the second, an explicit thread sleep will reduce the rate of your API access.

Let's assume we check connectivity every second, and that connectivity comes back after 6.5 seconds. With a fixed-delay approach such as a one-second Thread.sleep, we make 7 attempts to establish a connection.

When reconnecting, we have to consider two key aspects:

  1. Limit the number of attempts (each attempt is an overhead to the application).
  2. Reconnection should happen as soon as connectivity is back to normal.

If we increase the sleep time to reduce the number of attempts, we violate the second aspect. So it is important to understand that a fixed-delay approach is not the best one for many cases. Other backoff sequences, such as linear or Fibonacci, may be more suitable for your scenario.
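The trade-off can be seen in a plain-Java sketch (no library involved; all names here are invented): a fixed one-second delay needs one attempt per second of outage, while a Fibonacci sequence keeps the early retries fast and then backs off quickly.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch comparing retry-delay sequences (illustrative only).
public class BackoffSequences {

    // Fixed delay: 1s, 1s, 1s, ... one attempt per second of outage.
    static List<Integer> fixed(int delaySeconds, int attempts) {
        List<Integer> delays = new ArrayList<>();
        for (int i = 0; i < attempts; i++) {
            delays.add(delaySeconds);
        }
        return delays;
    }

    // Fibonacci delay: 1s, 1s, 2s, 3s, 5s, ... early retries are still fast,
    // but the waits grow, cutting the total number of attempts.
    static List<Integer> fibonacci(int attempts) {
        List<Integer> delays = new ArrayList<>();
        int a = 1, b = 1;
        for (int i = 0; i < attempts; i++) {
            delays.add(a);
            int next = a + b;
            a = b;
            b = next;
        }
        return delays;
    }

    public static void main(String[] args) {
        System.out.println(fixed(1, 7));  // 7 attempts to cover 7 seconds of outage
        System.out.println(fibonacci(5)); // 5 attempts already cover 1+1+2+3+5 = 12 seconds
    }
}
```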


Limiting the rate of calls with spring-retry

With spring-retry we can try a few different approaches: linear, Fibonacci, or a custom one. With the latest Spring versions you may find this as a feature of the framework, but if you are not using Spring, or are using an older version of it, you can use this library as a standalone dependency. So we are no longer required to use Thread.sleep as our default wait mechanism, nor to reinvent the wheel when we want to try another retry sequence.


Please visit the spring-retry GitHub project for more information and examples.

Sunday, July 9, 2017

Are your dependencies safe to use?

Using components with known vulnerabilities is the ninth item in the OWASP Top Ten and a widely ignored aspect of application security. According to the article "The Unfortunate Reality of Insecure Libraries", most of us are not aware that our applications contain well-known vulnerabilities.
The following are some interesting findings from the article.

  • 29.8 million (26%) of library downloads have known vulnerabilities
  • Security libraries are slightly more likely to have a known vulnerability than frameworks
  • Java apps are likely to include at least one vulnerable library
  • The most downloaded vulnerable libraries were GWT, Xerces, Spring MVC, and Struts 1.x


So it is really important to inspect our dependencies frequently against those known issues.
OWASP Dependency-Check is a tool used to identify vulnerable dependencies in Java projects. Since it comes with Maven, Gradle, and Ant plugins, it is really easy for a developer to run these inspections. It also comes as a Jenkins plugin, so we can integrate it and check periodically for vulnerabilities without human intervention.

Integrating OWASP Dependency-Check with Maven.
Include the following plugin in the plugins section of your POM file:

<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>2.0.0</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>


Once the plugin is configured, we can invoke it by executing the following Maven goal:

mvn clean install

It will cross-check your dependencies against the vulnerability database and generate a report if anything suspicious is found. But keep in mind that there can be false positives as well.

Example report:

[INFO] Analysis Complete (5 seconds)
[WARNING] One or more dependencies were identified with known vulnerabilities
in project name: jar-file-name-1.3.1.jar (jar-file-name:jar-file-name:x.x.x,
cpe:/a:groupId:artifact_id:x.x.x) : CVE-2020-9999, CVE-2020-4444444
See the dependency-check report for more details.

Sunday, June 11, 2017

Getting rid of code merge hell

If I asked you to list the most awkward moments you face in the software development life cycle, you would definitely list merging code as the top item. Since our merge tools are not intelligent enough to perform semantic code merging, almost all of them fail to merge two source files where more than one developer has changed the same line of the same file. We can hope that in the future there will be tools that address this issue, but for now we have to live with it.

Is that all we can do?

It is true that we cannot totally get rid of merge hell, but we can take some measures to reduce the complexity. I've seen teams spend more time merging their code than it took for the actual development. So what are the factors that determine how complex your code merge is?

Basically, there are two.
  • The size of the code chunk that you are about to merge.
  • The time elapsed since the last merge.


complexity = size * duration.

If you need to reduce the merging complexity, you have to reduce both factors. In continuous integration we often say: integrate frequently, with non-breaking code chunks.

In reality, it is not easy to frequently commit small code chunks that break neither the build nor anyone else's work. Usually, developers wish to isolate their work from others. If you are a Git lover you have branches; if you need deeper isolation, you choose forks. It's good to be isolated, but at some point you end up in big trouble.

Hassle-free integration.

You are not the first victim of this issue; most organisations have struggled with it and tried new approaches to address merge complexity. Trunk-based development (aka TBD) is an approach successfully adopted by industry giants: it encourages teams to work on a single branch (the trunk), which leads to fewer code merge problems while introducing plenty of new ones. However, it is worth trying this approach, since we are so fed up with huge merges.

for more info:

https://trunkbaseddevelopment.com/