Question: Best practices to keep developers from breaking each others' code

Priced at $20.00


ankos714 

Customer

Thu 27 Jan 2011 - 5:14 pm UTC

Greetings. Aside from version control, what are the best practices to keep one developer from breaking the code of another developer? Often, someone will break another's code, won't realize they've done so, and then our team spends hours searching for the root cause of the fault. We want a list of best practices in this area so that we can be sure to follow them. Any comments on specific tooling are welcome as well. Thanks

 

catman 

User

Fri 28 Jan 2011 - 6:04 pm UTC

What kind of projects are you developing, and in what language? What are you using for version control? How large a team is it? What software development methodology do you use (e.g. Extreme Programming, Agile, etc.)?

Generally speaking, you need to start with a software development model and version control (as you mentioned), then move on to automated integration tests and good unit tests.

I can address this more specifically once I have your answers to the questions above.

 

ankos714 

Customer

Sun 30 Jan 2011 - 6:04 pm UTC

We are developing database-driven websites. The team consists of 4 PHP+Mysql backend programmers and 5 CSS+JavaScript frontend programmers. We use a development model that is closest to Scrum. Our sprints are 2 weeks of coding and 1 week of testing+bug fixes. Requirements are frozen for the duration of the sprint. We use Subversion for version control and JIRA for ticketing & sprint management.

 

Hailstorm 

Former Researcher

Wed 2 Feb 2011 - 4:10 am UTC

Hi ankos714,

If you're using version control, then why are you spending hours searching for the root cause of the fault?  Just roll back to the most recent working version, then use comparison software such as WinMerge or Beyond Compare to look at the differences.

 

ankos714 

Customer

Sat 5 Feb 2011 - 7:18 am UTC

We have version control. If we roll it back, yes, that removes the conflicting features but it doesn't solve the fundamental problem. That is, the logic that one developer uses to implement a new feature is conflicting with the logic that another developer uses to implement a different new feature. If we want both features we need to find a way to move past this challenge. Part of the answer appears to be "continuous integration". Namely, accepting that this conflict will happen but catching it as soon as possible.

 

ankos714 

Cancelled

Sat 5 Feb 2011 - 7:19 am UTC

 

Roger Browne 

Researcher

Sat 5 Feb 2011 - 11:04 am UTC

Ankos714,

Technically, the answer is straightforward, as others have hinted.

It should be the responsibility of every developer to create and maintain a comprehensive set of unit tests. The rule should be that no developer can check in any code that causes someone else's test to fail.
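
For example, here is a minimal sketch of what one such test might look like with PHPUnit (both classes and the pricing rule are invented purely for illustration; run it with the phpunit command-line tool, where PHPUnit_Framework_TestCase is the PHPUnit 3.x base class):

    <?php
    // Illustration only: a tiny piece of business logic and the unit test
    // that protects it.  If another developer later changes the discount
    // logic in an incompatible way, this test fails and, under the rule
    // above, their commit is rejected.

    class DiscountCalculator
    {
        // Repeat customers get 10% off; everyone else pays full price.
        public function priceFor($customerType, $basePrice)
        {
            return ($customerType === 'repeat-customer') ? $basePrice * 0.9 : $basePrice;
        }
    }

    class DiscountCalculatorTest extends PHPUnit_Framework_TestCase
    {
        public function testRepeatCustomerGetsTenPercentOff()
        {
            $calc = new DiscountCalculator();
            $this->assertEquals(90.0, $calc->priceFor('repeat-customer', 100.0));
        }

        public function testNewCustomerPaysFullPrice()
        {
            $calc = new DiscountCalculator();
            $this->assertEquals(100.0, $calc->priceFor('new-customer', 100.0));
        }
    }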

The team as a whole must create and maintain a comprehensive integration test suite.

Occasionally, a breaking change will still get committed (because no test suite is 100% comprehensive). When this happens, a new test is added which catches that break, and the software is rolled back one commit at a time until the test no longer fails.
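
Mechanically, that walk-back can be scripted. Here is a rough sketch assuming a Subversion working copy and PHPUnit; the revision range and the test filename are placeholders, and the new test is kept as an uncommitted file so that "svn update -r" leaves it in place:

    <?php
    // Step the working copy back one revision at a time and re-run the new
    // test until it passes; the first revision where it passes is the last
    // good one, so the break came in with the next commit.  With many
    // commits, a binary search over the range beats this linear walk.
    $newTest = 'tests/RegressionTest.php';   // written for the break, not yet committed

    for ($rev = 1520; $rev >= 1500; $rev--) {
        $out = array();
        exec("svn update -r $rev --non-interactive", $out, $updateRc);
        if ($updateRc !== 0) {
            break;                           // stop if the update itself fails
        }
        $out = array();
        exec('phpunit ' . escapeshellarg($newTest), $out, $testRc);
        if ($testRc === 0) {
            echo "Last good revision: r$rev (break introduced in r" . ($rev + 1) . ")\n";
            break;
        }
    }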

Now let's turn to the "fundamental problem" that you explained just one minute before you cancelled the question: "the logic that one developer uses to implement a new feature is conflicting with the logic that another developer uses to implement a different new feature". This is an inherent problem with any development methodology that deliberately synchronises the implementation of different features (in your case, by partitioning the project into fixed-length "sprints").

Software development proceeds with far fewer conflicts if each incremental enhancement has its own timescale. Instead of a "two week sprint", an easier enhancement might be done in a week and a longer enhancement might be better suited to a three week development spurt. This requires continuous integration.

In this way, if developer A implements a new feature in a certain way, and checks it in, and it passes all the tests, then the obligation is on developer B to develop in such a way that it passes all of the tests (including A's new tests). This forces developer B to use logic which is compatible with that of developer A.
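
Continuous integration here does not require heavy tooling to get started. A Subversion post-commit hook that updates a scratch working copy and runs the whole suite will already catch most breaks within minutes of the commit. A minimal sketch, written in PHP since that is your team's language (the paths, the mail address and the interpreter path are placeholders; the script lives as hooks/post-commit in the repository and must be executable):

    #!/usr/bin/php
    <?php
    // Hypothetical post-commit hook: update a scratch working copy (checked
    // out once by hand beforehand), run the PHPUnit suite, and mail the
    // team if anything fails.
    $workingCopy = '/var/ci/working-copy';
    $notify      = 'dev-team@example.com';

    $out = array();
    exec('svn update --non-interactive ' . escapeshellarg($workingCopy), $out, $rc);
    if ($rc !== 0) {
        mail($notify, 'CI: svn update failed', implode("\n", $out));
        exit(1);
    }

    $out = array();
    exec('cd ' . escapeshellarg($workingCopy) . ' && phpunit tests', $out, $rc);
    if ($rc !== 0) {
        mail($notify, 'CI: test suite failing after latest commit', implode("\n", $out));
    }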

There are other reasons to avoid synchronised sprints, including efficiency considerations. Easy tasks tend to cause slacking near the end of the development phase (because they were finished early), and hard tasks tend to get rushed at the end or implemented poorly to meet the deadline, which just stores up problems for the future. But no doubt your organization had some counter-balancing reason for choosing this kind of methodology. In my experience the most common reason is that management decision-making is too slow to support a more flexible process.

 

mathtalk 

Researcher

Sat 5 Feb 2011 - 8:38 pm UTC

Here are my thoughts, which I was unfortunately still polishing.  The basic premise is that project work is rife with conflicts, and good management dictates approaches that minimize that conflict and channel the valid trade-off opportunities into productive resolutions.

[System architecture and project roles and responsibilities]

The description given of your team (4 backend + 5 frontend programmers) makes it kind of a two-tier picture.  I think you may find surprising advantages in rethinking it as three tiers:

- a frontend (presentation) tier: the CSS + JavaScript code that runs in the browser;
- a middleware tier: the PHP business logic and data access hosted on the web server;
- a database tier: the MySQL schema and the data it holds.

This may be a big departure not just in how you think about the system architecture but in the actual implementation.  The three-tier architecture enforces that the frontend does not access the database directly, but only indirectly through the middleware layer.
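
Concretely, the middleware tier might expose something like the following (a sketch only; the class, table and endpoint names are invented).  The JavaScript frontend calls a small HTTP endpoint and never issues SQL itself, so schema changes remain a backend concern:

    <?php
    // Hypothetical middleware-layer class: the only place that knows the
    // "orders" table exists.
    class OrderService
    {
        private $db;

        public function __construct(PDO $db)
        {
            $this->db = $db;
        }

        // Returns plain arrays, ready to be JSON-encoded for the frontend.
        public function openOrdersForCustomer($customerId)
        {
            $stmt = $this->db->prepare(
                'SELECT id, placed_at, total FROM orders
                 WHERE customer_id = ? AND status = "open"'
            );
            $stmt->execute(array($customerId));
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }
    }

    // A thin endpoint the frontend can call, e.g. GET /orders.php?customer=42
    // $service = new OrderService(new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass'));
    // echo json_encode($service->openOrdersForCustomer((int)$_GET['customer']));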

How does this help with reducing developer conflicts?  You really have to see it in practice, because the effect is stronger than one imagines from theory.  One thing is that you have fewer interactions, because the interfaces are reduced to two: frontend-to-middleware and middleware-to-database.  This is not just a matter of fewer interactions; it's primarily a matter of decoupling the frontend from database schema changes, so that "integration builds" can be performed regularly.

In practice this means that when a change request has to be implemented, division of responsibilities is often obvious, and developers who are concerned about the impact of a change will have a clearer idea of with whom they need to negotiate.  Keeping developers working "inside the lines" (demarcation of boundaries) is a significant factor in reducing conflicts & focusing the legitimate conflicts into hands capable of making good resolutions/tradeoff decisions.

There are also a couple of important techniques that are enabled by three-tier systems which deserve a write-up of their own.  The first is automatic code generation, particularly as database schema changes have to be propagated to the middleware layer.  If done correctly, such changes can be made and checked for "conflicts" with the rest of the system using the compile and linkage cycle.  Again one needs to see an example of this in practice to appreciate its power.  Let me point out in this connection your description of the project as "developing database-driven websites."  This emphasis suggests a high-reuse opportunity for generic database access classes.
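
As an illustration of the idea rather than a recommendation of this exact script, a few lines of PHP can read a table's column list from MySQL and regenerate a simple value class for the middleware layer; committing the regenerated file makes schema/middleware mismatches show up as visible diffs rather than run-time surprises (connection details, table name and output path are placeholders):

    <?php
    // Read the column names for one table and emit a bare value class.
    $pdo   = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
    $table = 'orders';

    // The first column of DESCRIBE output is the field name.
    $columns = $pdo->query("DESCRIBE `$table`")->fetchAll(PDO::FETCH_COLUMN, 0);

    $class = 'class ' . ucfirst($table) . "Row\n{\n";
    foreach ($columns as $col) {
        $class .= "    public \$$col;\n";
    }
    $class .= "}\n";

    file_put_contents("generated/{$table}Row.php", "<?php\n" . $class);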

The other "enabled technique" is an automated regression test bench.  Since the UI interacts with the rest of the system only through an interface defined by the middleware (Apache-hosted php code?), it is possible to capture the browser requests in a test bench database and "play back" the interactions, doing some basic consistency checks on the results.  While this does not in practice eliminate the role of a QA person, it increases the pace with which flagrant discrepancies, intended or not, can be found between releases.  Given your tightly-paced development schedule, this sounds like something that would be critical.  Remember that regression testing can be done on nightly integration builds, and that the earlier code breaks and other bugs are found, the more cheaply they can be fixed.  I return to the topic of QA and regression testing in its own section below.
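
A very reduced sketch of the "playback" side, assuming each captured request is stored as a path plus a checksum of the response body taken at capture time (the capture format, the host name and the checksum-only comparison are all simplifying assumptions):

    <?php
    // Re-issue previously captured GET requests against the integration
    // build and report flagrant differences.  Requires allow_url_fopen.
    $baseUrl  = 'http://integration.example.com';
    $captures = json_decode(file_get_contents('testbench/captures.json'), true);

    foreach ($captures as $c) {          // each $c: array('path' => ..., 'md5' => ...)
        $body = @file_get_contents($baseUrl . $c['path']);
        if ($body === false) {
            echo "FAIL  {$c['path']}  (request failed)\n";
        } elseif (md5($body) !== $c['md5']) {
            echo "DIFF  {$c['path']}  (response changed since capture)\n";
        } else {
            echo "OK    {$c['path']}\n";
        }
    }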

[Code walk-throughs and code reviews]

Since the question is about reducing inter-developer code conflicts, no discussion would be complete without broaching the topics of code walk-throughs and code reviews.  Project managers are often faced with a "difficult sell" to developers even when the project schedule is more slowly paced than what you describe.  Here by code walk-through I mean having a developer present the outline of their implementation to others (perhaps just one other) before detailed coding is done.  The opportunity is to nip "reinventing the wheel" in the bud and to replace a more error-prone ad hoc approach with one that has stood the test of time in your local project tradition.

By code review I mean presenting the "unit tested" implementation to others for compliance with team standards (API reuse, commenting, version control).  Just knowing that someone else whose opinion you value will be seeing your effort is a strong incentive to do your best work.  This also presents an opportunity for spotting unanticipated consequences by those who've "been there/done that".

Depending on how "gelled" a project team is and on the level of cooperation, a good bit of advice and consent may already be accepted as part of the "local tradition".  However, it should be recognized and reinforced by the project manager.

From a project management perspective it is important to track the "bugs prevented/improvements introduced" that come about from these meetings.  It's not just to help the developer realize the benefits, but also to make the case to "company management" that the time is well spent.

[Regression testing and quality assurance]

Noticeably absent, given the brief project write-up's only passing mention of JIRA for "ticketing & sprint management", are the roles and responsibilities for QA.  In my experience the amount of time needed for testing is often badly underestimated by those putting a project plan together for the first time.  As a rule of thumb, even taking advantage of what can (and should) be automated, the resources required for testing are three times those required for coding!

Part of best practice is including test cases (reproductions of bugs, or of missing "features") in the ticket database, possibly through links to results in the test bench tool proposed earlier.  As bugs are resolved and features added, some comfort can be had in knowing their fixes are bound into future integration tests.
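
One lightweight way to make that binding visible is to name each regression test after the ticket it closes, for example (the ticket key WEB-142 and the CartTotal class are invented for illustration; PHPUnit 3.x style as before):

    <?php
    // Illustration only: reproduces the bug reported in JIRA issue WEB-142.
    // Once the fix is in, this test stays in the suite and must keep
    // passing in every future integration build.

    class CartTotal
    {
        public function acceptsDiscountCode($code)
        {
            return strtolower($code) === 'winter2011';   // the fix: compare case-insensitively
        }
    }

    class Web142DiscountCodeTest extends PHPUnit_Framework_TestCase
    {
        public function testDiscountCodeIsCaseInsensitive()
        {
            $cart = new CartTotal();
            $this->assertTrue($cart->acceptsDiscountCode('WINTER2011'));
            $this->assertTrue($cart->acceptsDiscountCode('winter2011'));
        }
    }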

One topic I wanted to explore connects the quality assurance responsibilities to the "database-driven" nature of your websites.  I'm picturing that your developers are working in what I call a "near production" setting, in which they work with a test/development version of one or more customer websites, and that acceptance of changes is done there before being directly rolled out to production.

Of course this is playing with fire, but given a modest scope of agreed-upon changes for each rollout cycle, it may be a reasonable approach.  It would perhaps also be helpful to allow customers to do their own acceptance testing of changes, independently of the developers.  This isn't always practical, as the customer may have a naive expectation that things will "just work" without their involvement in testing.  But when the customer is willing and able, it can be really beneficial to have those additional resources looking out for unanticipated problems.

regards, mathtalk

 


 
