Sunday, May 30, 2021

To log or not to log defects in Agile

The most important statement about bugs is: "Do not manage bugs, fix them". I would maybe add "and learn". When we discover a bug we need to remember that we have discovered a flaw not only in our software but also in the process we follow.

First things first, let's define a bug at the beginning: it is any deviation from the expected result. I made it so general on purpose. We need to remember that bugs are related not only to the software we're developing but also, for example, to the build process or the deployment process that we follow, and many others.

When it comes to a bug we have one simple question to ask: is the bug related to the current stories (the stories we work on in the current sprint)?
  1. yes - fix it
  2. no
    • solving the bug won't jeopardize the sprint goal - solve it within the sprint
    • solving the bug will jeopardize the sprint goal - estimate it, add it to your backlog and prioritize it against other stories
Obviously I'm not talking about production issues, as treating these beasts requires a different set of rules, including a simple one: abandon anything you're doing and put 150% of your effort into solving the damn bug ;-).

How to deal with circular dependencies

There are various types of circular dependencies:

  • instantiation dependency - object A includes object B, which in turn includes object A; hence in order to create object A you need to have object B at hand, but in order to create object B you need object A...
  • operational dependency
    • object A invokes methods on object B and object B invokes methods on object A
    • classes from package com.abc depend on classes from package com.xyz, while classes from com.xyz depend on classes from com.abc
In the case of instantiation dependency it is impossible to create either object A or object B. In the case of operational dependency it might be hard to understand the operational flow and the responsibilities of a given module/package.

So how to solve it?

First of all, let's assume that we need to keep the relation between the two objects, but we also need to get rid of the circular part of it. Let's use the following classes as an example:

class Customer {
    List<Order> orders;
}

class Order {
    Customer owner;
}
There are conceptually three routes we can take:
Solution 1 - introduce a third object that holds the problematic dependencies
Solution 2 - introduce an object which both problematic classes depend on, so that they no longer depend on each other
Solution 3 - merge both problematic classes into one

The last solution requires a deep understanding of the business, as it indicates that what we considered separate concepts should/could actually be modeled as one.
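To make Solution 2 concrete, here is a minimal sketch (the CustomerId class is my illustrative invention, not a prescription): both classes now depend on a small value object instead of on each other, so the cycle disappears.

import java.util.ArrayList;
import java.util.List;

class CustomerId {
    private final long value;
    CustomerId(long value) { this.value = value; }
    long value() { return value; }
}

class Customer {
    private final CustomerId id;
    private final List<Order> orders = new ArrayList<>();
    Customer(CustomerId id) { this.id = id; }
}

class Order {
    // reference by id - no back-pointer to Customer
    private final CustomerId ownerId;
    Order(CustomerId ownerId) { this.ownerId = ownerId; }
}

Note that both the instantiation and the operational cycle are gone: a Customer can be created without any Order at hand, and an Order needs only an id.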

Friday, August 31, 2018

Miscaptured user stories

Every new requirement has various aspects that we need to take into consideration, and it is really easy to omit important bits and pieces. Throughout my career I've come across two mnemonics that were supposed to help you remember all the different aspects one should consider when developing a new piece of software: CUPRIMDSO & FURPS. The issue with these mnemonics is that they mean... nothing, and they are a bit dated. So here is a handy mnemonic:
"DUDE, no-one wants to work with SCRAPS of a  MISCAPTURED story"

This tool can be used when capturing a story, to see if we understand how to approach each particular aspect, but also as a definition of done, helping us check whether we've done all that we wanted or consciously decide that a given aspect is not relevant.

First let's demystify the mnemonic, and then we will deep-dive into each aspect:

D - distributed
U - ubiquity
D - deletability
E - estimability

M - maintainability
I - installability
S - security
C - configurability
A - accessibility
P - performance
T - testability
U - usability
R - reliability
E - evolvability
D - documentation

S - serviceability
C - customization
R - recoverability
A - agility
P - portability
S - scalability

Distributed

Most of us work in a distributed environment, hence we need to deal with all sorts of distributed computing problems:
  1. From the implementation perspective we need to take care of the fallacies of distributed computing
  2. From the testing perspective we need to ensure that the contract between two services (provider and consumer) is automatically tested and that we will not accidentally break it - consumer-driven contracts come to the rescue here

Ubiquity

Ubiquity is related to the glossary we use in our project. The idea is that the same words should be used to describe business concepts when talking with users, with the product owner, and among ourselves. Furthermore, these concepts should be found in the code, both production and test code. This so-called Ubiquitous Language concept comes from Domain-Driven Design, and its main purpose is to make communication between the various parties easier. Another benefit is that when reading the code we understand the concepts, so it is easier to develop/amend the existing model. Having said that, here are the questions we need to ask ourselves:
  1. Should we add a new definition to our glossary?
  2. Should we amend existing definition?
  3. Should we remove existing definition?

Deletability

The concept of deletability was presented to me by Greg Young in his great talk "The Art of Destroying Software" - sounds like a talk for a tester, doesn't it 😉. The additional question we ask ourselves is:
  • Am I able to add the new requirement in such a way that it will be easy to remove, or will I tangle it into the existing solution?
There are many reasons why we'd like to be able to quickly/easily remove features, to name just a few:
  1. Features that are easy to remove are relatively easy to understand as they tend to:
    1. have clear interfaces 
    2. be less coupled with the existing solution 
  2. Since they are relatively easy to understand, they are also relatively easy to refactor

Estimability

There is only one question that we need to ask ourselves:
  • do I have enough information to actually estimate the story?
I would ask this question regardless of whether you estimate or not. The purpose is not the estimate itself; the purpose is to put your mind into a state where it is keen to identify missing pieces of information.

Maintainability, Mobility

There are a couple of additional questions we need to ask when talking about maintainability:
  1. Does the new requirement fit into the current model?
  2. Can we implement it without a huge amount of "if"s?
  3. Are the responsibilities of the different classes/modules/services well defined?
If you answered 'no' to at least one of the above questions, rethink either the model you have in your application or the requirement - one of them will require redesign.
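As a hypothetical illustration of question 2 (the discount requirement is made up for the example): instead of scattering if (customer.isPremium()) branches across the codebase, the requirement can be expressed in the model, so a new customer type means a new class rather than yet another branch in every method.

interface CustomerType {
    double discount();
}

class PremiumCustomer implements CustomerType {
    public double discount() { return 0.20; }
}

class RegularCustomer implements CustomerType {
    public double discount() { return 0.05; }
}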

We are living in a mobile world, hence we need to think about how the new feature will be presented on various hand-held devices. How do we fit everything we need on the screen?

Installability

When it comes to installability, there are two most important questions:
  1. Assuming that you have one-click deployment, will it remain this way after implementing the story?
  2. If you have manual steps during deployment, will the story bring you closer to one-click deployment?
These two points are the most important, but besides them there are still a few scenarios that we need to consider (remember to think about both the default and the custom installation procedure - e.g. the user picks custom directories):
  1. Fresh install - vanilla case, default values
  2. Re-install (the same version)
  3. Upgrade (new version over old one)
  4. Downgrade (old one over newer)
  5. Uninstall

Security

Security is a really deep topic; I will propose just a few questions to think about regarding how a given story/requirement influences the security of our application:
  1. Are we adding any sensitive data to our application?
  2. Are roles currently defined in our application sufficient for the new story?
  3. Are interfaces required by the story secured?

Configurability

When it comes to configurability, think about two aspects:
  1. Things that change between environments - these certainly need to be configurable: before your software reaches the Production environment it will surely go through a Continuous Integration environment and probably some User Acceptance Testing (Staging) environment, and each of them will need a different configuration (e.g. different URLs of the services our application uses)
  2. Things that a support engineer (you :-)) would like to change in order to tune the application without building a new artefact and going through the procedure of deploying a new artefact to the Production environment (unless in your ecosystem it is really cheap to deploy stuff to Production - then you can be a bit more relaxed about this rule)
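Here is a minimal sketch of the first aspect (the file layout and property key are made up for the example): the same artefact is deployed everywhere, and only a small properties file differs per environment.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

class AppConfig {
    // e.g. config/ci.properties, config/staging.properties, config/prod.properties
    static Properties load(String environment) throws Exception {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get("config/" + environment + ".properties"))) {
            props.load(in);
        }
        return props;
    }
}

// usage: String url = AppConfig.load("staging").getProperty("payment.service.url");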

Accessibility

Accessibility has at least two different aspects:
  1. Disabled users - it is good if a given piece of information is conveyed to the user in at least two different ways (e.g. if you want to show an error message you can show it in a red frame, but what if the user is colour-blind? Hence you might not only put the message into a red frame but also add a red cross sign)
  2. How easy it is for a regular user to access a given feature - A/B testing might be really helpful to assess this

Performance

Quite an obvious point, I think:
  1. Does the story require adding long-lasting activities (e.g. report generation)?
    • Is there a possibility to perform the given operation in an asynchronous mode?
    • What is more important: latency or response time?
    • Is it possible to scale the problem out?
  2. Does the story affect the performance of our application?
    • Do we need to add/amend a performance scenario?
    • Do we need to add/amend the data used for performance tests?
    • How will users use this new functionality (often, once per week/month, etc.)?
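For the asynchronous option, a minimal sketch (Report and ReportRequest are placeholder types): the long-lasting work is handed to a thread pool and the caller gets a future immediately, deciding later whether to block, poll or register a callback.

import java.util.concurrent.CompletableFuture;

class Report {}
class ReportRequest {}

class ReportService {
    CompletableFuture<Report> generateAsync(ReportRequest request) {
        // generation runs on a pool thread, the caller is not blocked
        return CompletableFuture.supplyAsync(() -> generate(request));
    }

    private Report generate(ReportRequest request) {
        // ... long-lasting work ...
        return new Report();
    }
}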

Testability

The two most important things:
  1. Do we know how to test the story at various levels (at the beginning end-2-end would be the most important, but we need to think about lower-level tests as well)?
    • Will we need additional resources (e.g. a test DB) for automated tests?
  2. Does the application expose a proper API so that we can get all the information we need to assert that it works properly?
The thing that is often overlooked is the fact that in order for an application to be testable it sometimes needs to expose an additional API for testing purposes. We need to design our application for testability. This rule applies to all levels:
  1. unit - avoid static methods, separate object creation from usage
  2. integration - define clear (easy to cut) contexts with well defined entry/exit points
  3. end-2-end - define business interfaces that are required to assert if application is working properly
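A small sketch of "separate object creation from usage" at the unit level (OrderRepository and the in-memory fake are illustrative): the service receives its collaborator from outside, so a test can substitute a fake without touching a real database.

import java.util.ArrayList;
import java.util.List;

class Order {}

interface OrderRepository {
    void save(Order order);
}

class OrderService {
    private final OrderRepository repository; // injected, not created with "new" inside

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void place(Order order) {
        repository.save(order);
    }
}

// in a unit test:
class InMemoryOrderRepository implements OrderRepository {
    final List<Order> saved = new ArrayList<>();
    public void save(Order order) { saved.add(order); }
}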

Usability

When it comes to usability the question is fairly simple: is our application user-friendly? There are various more fine-grained metrics we can use to assess user-friendliness:
  1. How efficient is it to achieve a goal in our application?
    1. How much time does it take?
    2. How many clicks does the user need to make?
    3. How many screens does the user need to go through?
  2. How intuitive is our application?
    1. How much time does the user spend trying to figure out how to use it?
    2. How many screens has the user seen before finding what she wanted?
  3. How much do users like our application?

Reliability

As I already said, I know that the software we create is faultless; however, the production environment is a dangerous place. Various things may happen:
  1. The server on which our application is deployed unexpectedly goes down
  2. A disc gets unmounted
  3. When trying to log some vitally important message, we run out of disk space
  4. A service which we are using becomes unreachable
  5. Our database, located on a separate server, becomes unreachable
  6. etc.

Evolvability

I really like the division into two different quality aspects:
  1. Quality of a feature - does it fulfil users' expectations?
  2. Quality of a design - is it easy to evolve the architecture further?
When talking about evolvability I have the "quality of a design" in mind. It requires a certain sense of art to develop a solution that does not add things that will be hard to change in the future. Here is a very nice talk about Evolutionary Architecture.

Documentation

When talking about documentation we need to distinguish four different aspects:
  1. User documentation - it should be treated the same way as any other product we deliver to the customer (when the Agile Manifesto talks about "working software over comprehensive documentation", it is not user documentation that it has in mind).
  2. Architectural documentation - this one should be kept as close to the code as possible, as it tends to become outdated and rot when no one looks at it. A good practice is to keep it as a readme file in the code repository. It is easy to exaggerate with it, so think twice before putting a piece of information into the readme - even though it is close to the code, your IDE will not always recognize that when refactoring your solution you also need to refactor your readme.
  3. Self-documenting code - the best book I've read so far about self-documenting code is "Clean Code" by Robert C. Martin
    1. Method/class names - the main idea here is that one should be able to find the business domain in the code.
    2. Executable documentation - tests are one of the ways of documenting the behaviour of the software, especially with the various BDD frameworks which allow us to write sentences in natural language that are then translated into executable pieces of test code run against our application.
  4. Project documentation - project charters, high-level design, low-level design, test plans, test execution S-curves, etc. - keep it as small/short as possible - this is the "comprehensive documentation" that we value less than "working software". Let's be honest, it is needed to some extent - especially if you are working in an environment that needs to go through various audits - but the goal is to use it; if you create a document which no one reads, a red sign should appear in your head.

Serviceability

We are creating perfect software that always works as intended... until it doesn't. Then the first thing we usually do is look into the logs and see what happened. When implementing a new story we need to think about the information we want to log (consider the various logging levels: debug, info, warn, error). This might seem obvious, but often when implementing a new story the difference between debug and info, or warn and error, might be pretty blurred.
I find the following distinction somewhat helpful when deciding whether a given log should be debug or info:
  1. INFO - contextual information needed to understand what is currently happening in our system, stated in business language
  2. DEBUG - low-level technical information, needed for debugging
Mind that we have a trade-off here between verbosity and readability. I advocate having readable code, not one cluttered with log.debug() every second line. Whenever you want to add a debug log, consider instead adding a test that would check whatever you are afraid of. With that in mind, log.debug() should be rather exceptional. INFO logs, on the other hand, should show you clear paths through your application: just by reading them you should know what business actions have been taken.
When it comes to distinguishing between error and warn I use a simple rule:
  1. WARN - we have an invalid situation, but the application knows how to handle it and it does not require admin intervention
  2. ERROR - we've encountered an erroneous situation that might require manual admin intervention
  3. FATAL - it's so bad that the application is about to commit "suicide"

A few generic comments about logging:
  1. Enough context information - when your application is processing messages, include for example the id of the processed message; if a user is performing some action, include the id of the user, etc.
  2. Log all relevant pieces of information in one line - as a rule of thumb, do not log multiline messages; it is really hard to grep all the information you might need when investigating a given problem
  3. Be friendly to the tools you use to analyse logs - consider the following log messages: "processing of message (id: 123, version: 1) started" and "processing of message started (id=123, version=1)". Both contain the same information, but the latter is Splunk-friendly. Splunk will automatically recognize that there are variables "id" and "version". It will even allow you to use them in queries (e.g. "'processing of message started' AND version>2 | stats count by id"). Basically, think about how you will later search through your logs.
  4. Security aspects - who is going to read your logs? Don't log sensitive data.
  5. Log message consistency - especially when two log messages are related, e.g. "processing of message started (id=123, version=1)" and "processing of message finished (id=123, version=1)"
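To tie these rules together, a sketch using SLF4J (the logging facade is my assumption, RecoverableException is a placeholder): one line per event, key=value context, consistent start/finish wording, and WARN reserved for handled situations.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class RecoverableException extends RuntimeException {}

class MessageProcessor {
    private static final Logger log = LoggerFactory.getLogger(MessageProcessor.class);

    void process(long id, int version) {
        log.info("processing of message started (id={}, version={})", id, version);
        try {
            // ... business logic ...
            log.info("processing of message finished (id={}, version={})", id, version);
        } catch (RecoverableException e) {
            // invalid but handled, no admin intervention required -> WARN
            log.warn("processing of message postponed, will retry (id={}, version={})", id, version, e);
        }
    }
}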

Customizability

Customizability is oriented towards the user: how can a user customize the way our application looks or behaves?

Recoverability

There are basically two questions that we want to ask:
  1. Having experienced a failure, will our application recover by itself?
  2. Assuming that it recovers, can I rely on the information that will be presented to me?

Agility

When thinking about agility, you try to assess whether you are able to deliver the story in an agile way (the fact that you call it a story instead of a requirement does not count ;-)). Several aspects are to be taken into consideration:
  1. Team's business knowledge/intuition vs. Product Owner availability - even though at the beginning it seems that you know how to implement/test the story, I can guarantee that unexpected questions will arise. Having said that, it is important that either the team has a great deal of business knowledge, so that they are able to answer these questions themselves, or the Product Owner is highly available.
  2. The team understands the value the story delivers, so that they are able to come up with a good-enough solution
  3. The story is fairly short, so that it fits into an iteration or, if you use kanban, does not clog your pipeline
  4. The team has enough knowledge to estimate the story (using whatever units)
  5. The team knows how to demonstrate the story

Portability

I myself do not have much experience with creating portable software, hence I will rely on the Wikipedia article on Software Portability. Having said that, I totally agree that even though this "ility" may not be applicable to all the software we write, it should not remain unrecognized.

Scalability 

When talking about scalability, I obviously have in mind the concept of scaling out, not scaling up. Let's briefly cover the difference between these two models:
  • scale up - bring more computational power (a stronger CPU, more/faster RAM, SSD instead of HDD, etc.)
  • scale out - spin up more instances of your service, so that they do the computation in parallel
Here are some helpful questions:
  1. Does the story require implementing some complex/heavy algorithms?
  2. How well does the story fit our current scaling-out model?
  3. If we don't scale out (e.g. it is not yet needed), do we have an idea how the story could be scaled out?

Monday, December 16, 2013

Good/Bad vs. Appreciative Inquiry Retrospective

Lately I had a great chance to participate in a Coach Retreat session organized by Oana Juncu in cooperation with Code Sprinters in Krakow. The idea of a coach retreat is nicely described on Oana's blog, hence I won't be describing it here.

The thing I wanted to write about is how we can use one of the coaching techniques (called Appreciative Inquiry) while doing Agile Retrospectives.

The main idea behind AI is to focus on the place we want to be and on the ways we can get there (as opposed to focusing on the problems we may find on the way). The best way to understand a new technique is to compare it with something we already know. Let's compare the regular problem-solving approach with AI:
  • problem solving: "felt need" (identification of the problem), analysis of causes, analysis of possible solutions, action planning (treatment)
  • Appreciative Inquiry: appreciating the best of "what is", envisioning "what might be", dialoguing "what should be", innovating "what will be"
One might think that it is just a re-wording, a game of words, but I perceive it more as a mindset. The founder of AI, David Cooperrider, in his article Appreciative Inquiry in Organizational Life states that over-focusing on problem-solving techniques can actually limit your imagination and kill the very potential that is needed to overcome difficulties. There is a danger that you will start perceiving a step as a goal, losing the original goal out of sight. Sometimes it even happens that the more you focus on a problem, the more you bind yourself to it and the more difficult it actually becomes to deal with it.

One of the biggest problems with retros is that while discussing the current situation/problems teams fall into a fin-de-siècle mood: we've had these problems, we still have them, basically we're in deep shit and nothing can be done about it. The AI technique does not allow such a mood to enter the room.

Here is a proposal of what your retro board could look like (it's taken almost precisely from the wiki article about AI): one row per phase of the AI cycle - Discovery (appreciate the best of what is: our strengths), Dream (envision what might be), Design (plan what should be) and Destiny (commit to what will be done).
The Design part, which is about planning and prioritizing a process that would work well, wasn't on the board when we were doing this style of retro, but that discussion was happening in the room.

One problem we as a team had with the AI style of retrospective is that we felt really uncomfortable talking about our strengths, and we left this row almost empty (only one sticky note appeared there).

All in all, I must say that I have a great team that really rarely falls into a negative (non-constructive) mood, hence it might have been easy to introduce this type of retrospective.
On the other hand, I can imagine that this type of retro can serve well a team which often falls into such a negative mood, but it may require a skilled scrum master to shape the discussion using appreciative inquiries.

Saturday, October 26, 2013

Behaviour Driven Development

The company I work for has been using BDD for over 2 years now. I think it would be good to define what BDD means for us (as BDD is possibly similar to Agile in that you'll have a hard time finding two people for whom it means the same thing). A couple of points that visualise our mindset:

  1. We do not add features; we add an ability, so that our product is able to act under certain new circumstances (obviously we first need to understand the business context/circumstances from a much wider perspective than before)
  2. Since we understand the business much better (we have small business trainings at the beginning of working on every new user story - it was difficult at the very beginning, as there was much to learn; it's much easier today), it is much easier for us to think about different business border cases. Still being engineers, we come up with cases that a non-engineer would rather not come up with, and these usually also end up as separate examples.
  3. We do not have business documentation, as we fill up the so-called system tests with a bunch of examples of how our system behaves under certain circumstances - after these two years I can say that for us this is the most difficult part (there are certain traps waiting for the one who tries to follow this path - I will write about them later on)
  4. We have also introduced BDD at the unit test level (the discussion under this blog post describes what it means quite well: http://dannorth.net/2012/05/31/bdd-is-like-tdd-if/)
 
There are a few tools that support BDD (e.g. JBehave, Cucumber). I can speak only about JBehave, as this is the only tool we've been using; nevertheless I believe that whichever tool you choose, the problems you'll encounter will be similar.
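For those who haven't seen JBehave, here is roughly what the split looks like (the scenario, the Account class and the step names are made up for the example). A story file contains plain-language examples:

Scenario: customer withdraws cash within the balance

Given a customer with balance of 100 EUR
When the customer withdraws 60 EUR
Then the remaining balance is 40 EUR

and a steps class binds each sentence to executable code:

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

class Account {
    private int balance;
    Account(int balance) { this.balance = balance; }
    void withdraw(int amount) { balance -= amount; }
    int balance() { return balance; }
}

public class WithdrawalSteps {
    private Account account;

    @Given("a customer with balance of $amount EUR")
    public void givenACustomerWithBalance(int amount) { account = new Account(amount); }

    @When("the customer withdraws $amount EUR")
    public void whenTheCustomerWithdraws(int amount) { account.withdraw(amount); }

    @Then("the remaining balance is $amount EUR")
    public void thenTheRemainingBalanceIs(int amount) {
        if (account.balance() != amount) throw new AssertionError("expected " + amount);
    }
}

This doubling of files (story + steps) is exactly the maintenance overhead that point 1 below talks about.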

JBehave is just a tool, and at some point in time we realized that we had hurt ourselves with it - we had simply used it in the wrong way. I will write just short sentences, but you must know that behind each sentence there was blood and sweat:
  1. JBehave is an overhead - you suddenly need to maintain twice as many files (story + implementation). Use it only if someone will be reading the story files, only if they will truly be treated as documentation. If this is not going to be the case, think about what it is that you want to achieve with JBehave, because there is a price you are going to pay.
  2. Think twice before you use JBehave on lower (than system tests) test levels. Our experience showed that JBehave is great for visualising the main concepts in the software (the main paths through the SW), but it is not the best tool for explaining some complicated and-or-or-and-and logic, as in order to understand the core of such logic you need to grasp multiple stories at once, which is really difficult.
  3. Examples alone are sometimes not enough - JBehave provides a special keyword ("Narrative") to add some context, but in our case we needed to add more description of the context than seemed intuitive at the beginning (after a couple of months we found ourselves in the quite usual situation where something that was obvious some time ago is not so obvious anymore)
  4. Do not use technical language - at the end of the day it is to be read by business people, and they are not really interested in XPaths or threads
  5. I hope Uncle Bob won't mind me quoting him as a fifth point: "You don't get a special license to write highly coupled tests just because you're doing BDD" - R.C. Martin

The fifth point is taken from the discussion that took place under Dan North's blog post I've already mentioned in this post.

I hope to find some time in the near future to write a post about the more technical aspects and pitfalls.

Monday, February 11, 2013

Money and self organized teams

Managing a self-organized team is a challenge... especially when it comes to salary. On the one hand, we cannot leave it up to the team to decide how much their salaries will be raised. On the other hand, asking the team for an opinion about a given team member when it is time for his annual appraisal is so obvious that even if the team wants to be honest and is mature, the opinion may be skewed. It seems that there is no easy way out of this situation.

It seems that the only way is to leave it up to the manager to decide, but a situation where a salary raise depends on one person only leaves room for abuse and by definition does not even try to be objective.

Here is a proposal of an algorithm that can be used to calculate salary raises in a self-organized team. An algorithm that promotes team-work, tends to be as objective as possible and does not introduce "an acid atmosphere" in the team, but on the other hand gives precise numbers.

According to my model, a salary raise is composed of two factors:

R = R1 + R2

1. (R1) Inflation + Salary ranges

To compute this part (R1) we can employ an equation for a restoring force:
I - inflation factor
k - restoring factor
s_low - lower boundary of a salary range
s_high - upper boundary of a salary range
s_mean = (s_high + s_low) / 2 - mean of a salary range
s_curr - current salary
k = I / (s_high - s_mean)

if (s_curr < s_high) {
    R1 = I - k * (s_curr - s_mean)
} else {
    R1 = 0
}

Short analysis:
- if you are below the mean of your salary range, this restoring force will pull you up; you will get an extra boost as you are underpaid
- on the other hand, if you are above the mean of your salary range, this factor in the most extreme situation (s_curr >= s_high) will be equal to zero (never negative)
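A direct translation of R1 into Java (values taken from the simulation further down; e.g. John, a junior at 500$ in the 500$-900$ range with 5% inflation, gets R1 = 10%):

class RestoringRaise {
    // inflation - I, sLow/sHigh - salary range boundaries, sCurr - current salary
    static double r1(double inflation, double sLow, double sHigh, double sCurr) {
        double sMean = (sHigh + sLow) / 2;
        double k = inflation / (sHigh - sMean);
        return sCurr < sHigh ? inflation - k * (sCurr - sMean) : 0;
    }
}

// r1(0.05, 500, 900, 500) == 0.10  (underpaid junior gets a double boost)
// r1(0.05, 1100, 1500, 1600) == 0.0  (above the range - no restoring component)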

2. (R2) Performance factor

Everyone's opinions, appropriately gathered and analyzed, compose the core of this factor (weights can be applied but, to be honest, in order to make the process as fair as possible I wouldn't introduce them). Just for the sake of this example let's assume that we have two teams cooperating with each other and that every team consists of 7 people.

Step 1: you ask every person to put all the others in a row, starting from the one she likes to work with the most.

After this step is taken you have [7 (team 1) + 7 (team 2)] = 14 lists with [7 - 1 (all team-mates except the author of the list) + 7 (all mates from the other team)] = 13 names each.

Step 2: you take all the lists together and sum them up in such a way that if a given person is the last person on a given list she gets 0 points, and if she is in first place on a given list she gets 12 points.

After this step is taken you have one list with pairs of names and scores/weights.

In our current example you can simply compute that a given person can get at most 12 (points) * 13 (lists) = 156 points.

Having such a list and a budget for salary raises it is very easy (just by using a proportion) to compute how much of a pay raise a given person should get. If you'd like to take into consideration, for example, the client's opinion with a higher weight (let's say 70%), you can ask the client to assess every person from both teams by assigning a number c_i from the range 0..156 (or you can perform a simple normalization of 156 down to 100, which would probably be more intuitive for the client).

c_i - weight of the i-th person assigned by the client
score_i - weight of the i-th person obtained directly from the lists
w_i - final weight of the i-th person

w_i = 70% * c_i + 30% * score_i

B - budget for salary raises
s_i - current salary of the i-th person

Each person's R2 is a raise percentage proportional to her final weight, with the proportionality constant chosen so that the raises consume exactly what is left of the budget after the R1 raises:

R2_i = w_i * (B - Σ_j R1_j * s_j) / Σ_j (w_j * s_j)

R_i = R1_i + R2_i
Advantages:

1. By combining the subjective opinions of all the people who work with a given person, you get an opinion that is as close to objective as possible.
2. This algorithm promotes team players over individualists.
3. It is easy to employ additional weights if needed.

Disadvantages:

1. Being forced to put your team-mates in an order usually creates a dissonance (especially putting someone in the last position) - that's why another process of giving feedback should be introduced in the team, so that a given person knows what (s)he should be working on. That way, being fair with each other, we encourage ourselves to grow and develop.
2. It is best if the appraisal process is triggered for everybody simultaneously; otherwise people constantly forced to put their team-mates in a row may become frustrated.

 

Step-by-Step simulation

Team of 6 people + client:

Alex (Senior engineer): 1000$  
Jane (Senior engineer): 900$
John (Junior engineer): 500$
Mark (Principal engineer): 1600$
Anna (Junior engineer): 800$
Barbara (Senior engineer): 1200$

Junior engineer salary ranges: 500$ - 900$
Senior engineer salary ranges: 800$ - 1200$
Principal engineer salary ranges: 1100$ - 1500$

Inflation: 5%
Budget for salary raises: 600$

Alex's list:  Jane, John, Mark, Anna, Barbara
Jane's list: Alex, Barbara, John, Anna, Mark
John's list: Alex, Mark, Barbara, Jane, Anna
Mark's list: John, Alex, Anna, Barbara, Jane
Anna's list: Barbara, Alex, Mark, Jane, John
Barbara's list: Anna, Jane, Mark, John, Alex

Alex: 4 + 4 + 3 + 3 + 0 = 14
Jane: 4 + 1 + 1 + 0 + 3 = 9
John: 3 + 2 + 4 + 0 + 1 = 10
Mark: 2 + 0 + 3 + 2 + 2 = 9
Anna: 1 + 1 + 0 + 2 + 4 = 8
Barbara: 0 + 3 + 2 + 1 + 4 = 10

Total number of points possible to get: 4 * 5 = 20
After normalizing the scores to the range 0..100 we have:

Alex: 70
Jane: 45
John: 50
Mark: 45
Anna: 40
Barbara: 50

Client's opinion about every team-mate:

Alex: 80
Jane: 75
John: 60
Mark: 65
Anna: 90
Barbara: 80

Final score every person obtains (keeping in mind that we value the client's opinion more [70%]):

Alex: 0.3 * 70 + 0.7 * 80 = 21 + 56 = 77
Jane: 0.3 * 45 + 0.7 * 75 = 13.5 + 52.5 = 66
John: 0.3 * 50 + 0.7 * 60 = 15 + 42 = 57
Mark: 0.3 * 45 + 0.7 * 65 = 13.5 + 45.5 = 59
Anna: 0.3 * 40 + 0.7 * 90 = 12 + 63 = 75
Barbara: 0.3 * 50 + 0.7 * 80 = 15 + 56 = 71

After putting all the equations together we obtain the following values:
Alex: 12.85%
Jane: 14.23%
John: 15.81%
Mark: 6.02%
Anna: 10.15%
Barbara: 7.24%
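For the curious, here is a compact sketch that reproduces the numbers above end to end: it computes R1 per person, spends the rest of the budget proportionally to the final weights (the R2 formula), and prints the total raise percentages.

class RaiseSimulation {
    public static void main(String[] args) {
        String[] names  = {"Alex", "Jane", "John", "Mark", "Anna", "Barbara"};
        double[] salary = {1000, 900, 500, 1600, 800, 1200};
        double[] low    = { 800, 800, 500, 1100, 500,  800};
        double[] high   = {1200, 1200, 900, 1500, 900, 1200};
        double[] score  = {70, 45, 50, 45, 40, 50};  // normalized peer scores
        double[] client = {80, 75, 60, 65, 90, 80};  // client's assessment
        double inflation = 0.05, budget = 600;

        double[] r1 = new double[names.length], w = new double[names.length];
        double spentOnR1 = 0, weightedSalaries = 0;
        for (int i = 0; i < names.length; i++) {
            double mean = (high[i] + low[i]) / 2;
            double k = inflation / (high[i] - mean);
            r1[i] = salary[i] < high[i] ? inflation - k * (salary[i] - mean) : 0;
            w[i] = 0.7 * client[i] + 0.3 * score[i];
            spentOnR1 += r1[i] * salary[i];
            weightedSalaries += w[i] * salary[i];
        }
        for (int i = 0; i < names.length; i++) {
            double r2 = w[i] * (budget - spentOnR1) / weightedSalaries;
            System.out.printf("%s: %.2f%%%n", names[i], (r1[i] + r2) * 100);
        }
    }
}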


Sunday, February 10, 2013

Statement coverage vs. Branch coverage vs. Path coverage

This post is for those who would like to prepare for the ISTQB exam and have difficulties understanding the difference between the various types of coverage. Let's consider the following piece of code:

public int returnInput(int input, boolean condition1, boolean condition2, boolean condition3) {
  int x = input;
  int y = 0;
  if (condition1)
    x++;
  if (condition2)
    x--;
  if (condition3)
    y = x;
  return y;
}

Statement coverage
In order to execute every statement we need only one test case, which would set all conditions to true; every line of code (every statement) is touched.

shouldReturnInput(x, true, true, true) - 100% of statements covered

But only half of the branches are covered, and only one path.

Branch coverage
You can visualize every if-statement as two branches (a true-branch and a false-branch). So it can clearly be seen that the above test case follows only the "true-branches" of every if-statement. Only 50% of the branches are covered.

In order to cover 100% of the branches we would need to add the following test case:
shouldReturnInput(x, false, false, false)

With these two test cases we have 100% of statements covered and 100% of branches covered.

Path coverage
Nevertheless there is still the concept of path coverage. In order to understand path coverage it is good to visualize the above code as a binary tree: the root splits on condition1 into a true and a false branch, each of those nodes splits on condition2, and each of those on condition3, giving 2^3 = 8 leaves - one per path.
As you can probably see, the above two test cases cover only two paths, t-t-t and f-f-f, while in fact there are 8 separate paths (condition1 - condition2 - condition3):
t-t-t - covered with test case 1
t-t-f
t-f-t
t-f-f
f-t-t
f-t-f
f-f-t
f-f-f - covered with test case 2
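If you really wanted all 8 paths exercised, the exhaustive set of test cases is trivial to generate (a sketch; in practice think hard about whether full path coverage is worth the cost, as the number of paths grows exponentially with the number of conditions):

boolean[] values = {true, false};
for (boolean c1 : values)
    for (boolean c2 : values)
        for (boolean c3 : values)
            // one call per path: 2 * 2 * 2 = 8 combinations
            shouldReturnInput(5, c1, c2, c3);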