QCon London 2011 – Thursday

March 13, 2011

This was my first visit to QCon; unfortunately I only managed to go for one day, but it was well worth it. Here are my quick notes from the talks I made it to.

Patrick Copeland – Innovation at Google (video)

  • Ideas on their own are pretty worthless
  • You are better off making sure you have the relentless innovators who can turn ideas into something real
  • Test your ideas as cheaply as you can to see if they will work in the real world
  • Aim to fail fast, and try lots of things
  • Use Pretotypes – low-tech, cheap, simple “pretend” mockups of ideas to see whether they make sense in practice – e.g. for phone apps, sketches on paper are adequate
  • If you go ahead and build a working prototype, then get stats on returning visitors to measure whether people will actually use it (a rough sketch of this kind of measurement follows this list)
  • If users keep returning over a period of time then the idea is worth pursuing – if you get sustained growth then you might have a Facebook on your hands
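
To make the returning-visitor idea concrete, here is a minimal sketch of mine (not from the talk – the data model and names are assumptions): given which weeks each user visited in, compute the fraction of one week’s users who come back the next week.

    import java.util.*;

    // Minimal sketch: week-over-week returning-visitor rate from visit logs.
    // A "returning" user is one seen in both week N and week N+1.
    class RetentionSketch {
        static double returningRate(Map<String, Set<Integer>> visitWeeksByUser, int week) {
            long active = 0, returned = 0;
            for (Set<Integer> weeks : visitWeeksByUser.values()) {
                if (weeks.contains(week)) {
                    active++;
                    if (weeks.contains(week + 1)) returned++;
                }
            }
            return active == 0 ? 0.0 : (double) returned / active;
        }
    }

If that rate holds up, or grows, week after week, the idea is worth pursuing.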

Ulf Wiger – Testing for the Unexpected (slides)

  • Random testing is initially more effort than unit testing, but pays off in the long run as system complexity grows
  • Various problems with traditional test methods:
    • The spec can rarely be trusted, as it is just someone’s idea of what correctness is
    • It is hard to anticipate what the unexpected scenarios you need to test for are
    • Coverage is an unreliable metric, so you don’t get a sense of how good your testing is
  • Random testing with a tool like QuickCheck flushes out gaps in the spec, and is great for finding issues with particular combinations of activity or with certain inputs (e.g. negative numbers in the factorial example – see the sketch after this list)
  • To use random testing you effectively need to formalise your spec around expected inputs
  • If you are testing bad inputs, then best practice is to send messages that are just a little bit bad
  • Stateful components are much harder to test
  • Have a good error handling strategy for your app, so you can effectively test it
  • QuickCheck can help you home in on the precise combinations/inputs that cause errors
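
A hand-rolled illustration of the random-testing idea (the talk used Erlang and QuickCheck; this Java sketch is mine, with an assumed naive factorial): generate random inputs, check a property that should always hold, and let the negative inputs expose the gap in the spec.

    import java.util.Random;

    // Minimal random-testing sketch: fire random inputs at a naive factorial
    // and check a property that should always hold. Negative inputs expose
    // the kind of spec gap the talk described.
    class RandomTestSketch {
        static long factorial(int n) {            // implementation under test
            long r = 1;
            for (int i = 2; i <= n; i++) r *= i;
            return r;                             // silently returns 1 for n < 0
        }

        public static void main(String[] args) {
            Random rnd = new Random();
            for (int i = 0; i < 1000; i++) {
                int n = rnd.nextInt(41) - 20;     // range deliberately includes negatives
                long fn = factorial(n);
                if (n < 0 && fn == 1)
                    System.out.println("Spec gap: factorial(" + n + ") quietly returned 1");
                else if (n >= 1 && fn != n * factorial(n - 1))
                    System.out.println("Property failed for n = " + n);
            }
        }
    }

A real QuickCheck run would also shrink failing cases down to a minimal counterexample, which is a large part of its value.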

Jon Jagger – Deliberate Practice (slides)

  • Agile is about being able to change direction quickly – not just delivering quickly
  • If you practice something you can already do, you reduce effort, awareness and change
  • Deliberate Practice is doing something you can’t quite do, which increases effort, awareness and change
  • A Team is a group of people who learn together
  • Designing Deliberate Practice:
    • Addition – don’t try stopping bad habits, it doesn’t work (Orange Penguins); you need to displace them by learning good habits
    • Challenge – it has to be something you can’t quite do already – note this means it might not be fun
    • Pair Practice – the extra discussion and feedback generated will magnify the value of the practice
    • Coaching – a good coach should be able to break down learning into small manageable chunks
    • Visibility
    • Feedback – we are all bad at self-evaluation, so we need others to give feedback – but it isn’t actually feedback if nothing changes as a result
    • Aim is not to complete the task, but to learn from it

Roy Osherove – Team Leadership in the Age of Agile (slides)

  • 3 stages of agile team maturity, each with its own needs from a team lead
  • Chaos – team always fire fighting, too much to do, no time to learn, constantly being pulled in different directions
    • Main goal is to get out of survival mode
    • Lead needs to take control – a bit of command and control wouldn’t hurt here
    • Focus on shielding the team from management whims
    • Get daily standups, build automation, code reviews, build by feature and pair programming in place
    • Don’t worry about TDD yet, team has too much else to do
  • Learning – team has capacity to learn (slack), but aren’t yet self organising
    • Lead needs to go more into coaching mode here – don’t fix their problems for them
    • Mantra for dealing with problems should be “What are you going to do about it?”
    • Think about commitment language – be specific, make sure people commit to a precise thing, by a precise time
    • Build a culture of integrity – the team should notify if commitments won’t be met
    • Useful influence model – need both motivation and ability at the personal, social and structural level
  • Self Organising – team is fully self organising
    • Influence team direction by changing constraints and goals, but in general get out of their way

Jurgen Appelo – Complexity vs Lean: The Big Showdown (slides)

  • Complexity Theory is a big, messy area taking ideas from various fields, including Systems Thinking, Chaos Theory, Game Theory and a bunch of others
  • Simple & Complicated refer to how easy it is to understand the structure of a system
  • Ordered, Complex & Chaotic refer to how easy it is to predict a system
  • Jurgen has identified 6 main areas for successfully managing teams in his Management 3.0 book
    • Energize People
    • Empower Teams
    • Align Constraints
    • Develop Competence
    • Grow Structure
    • Improve Everything
  • Jurgen went through the 7 pillars of Lean and the 5 elements of Kanban. For each of them he showed that “it is probably a bit more complicated than that”. See his slides for details.
  • In summary, while the Lean tools are useful, they don’t represent the entire picture, so don’t follow them blindly without thinking for yourself.

Jez Humble – Remediation patterns – how to achieve low risk releases (slides)

  • Three main ways to reduce the risk and impact of bugs in production
    • Prevention – don’t release buggy code
    • Low Risk Releases – release patterns to enable fast rollback, or to minimise the impact of issues
    • Incremental Delivery – by minimising the amount of change in each release you minimise the risk
  • Basic assumption of talk – people are running unit tests and using continuous integration (i.e. no feature branches)
  • Deployment Pipelines (Unit Test -> Acceptance Test -> Manual Testing -> Deploy)
    • Don’t start one stage until the release candidate has passed the previous stages
    • If failures are detected at any stage, then the automated tests of the previous stage were not good enough
    • Hard Stuff
      • Production Like Environments
      • Testing cross functional requirements (e.g. performance or security)
      • Acceptance test maintainability
  • Automate Deployment
  • Ensure developers, testers and support/operations people talk about changes
  • Canary Releases
    • Release to a limited number of power users first
    • Because the previous version is still supported, rollback should be easy and almost automatic
    • Bugs have reduced impact, as they don’t hit the entire community
    • Makes it easier to do A/B testing which is useful for checking things like performance
  • Immune System
    • Measure key metrics of new system (preferably with just Canary users)
    • If any metrics get worse (e.g. performance, number of orders processed, revenue) then roll back (see the sketch after this list)
    • For this to work you need good monitoring in place
  • Use Feature Toggles/Branching by Abstraction to enable/disable features (a minimal toggle sketch also follows this list)
    • Need to test all the combinations of features you expect to support in production
    • Could combine with random testing to turn different combinations of features on/off
    • Makes rollback of just one feature easy
    • Important to clean up and remove old unused code paths
    • Be aware of impact on things like coverage reports after a code path becomes unused
  • Dark Launching
    • Hide a feature in a production system without exposing its UI
    • Gradually expose more functionality (e.g. connectivity, sending messages) to test impact on system before eventually turning on UI
  • To measure the effectiveness of your release process, answer the following 2 questions
    • How long would it take to validate and release a one line change?
    • If your data centre blew up how long would it take to redeploy?
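
To make the immune-system idea concrete, here is a minimal sketch of mine (the metric, numbers and tooling hook are invented for illustration): after deploying to canary users, compare a key metric against the previous version’s baseline and trigger rollback if it degrades.

    // Minimal immune-system sketch: compare a key metric for canary users
    // against the baseline from the previous version; roll back automatically
    // if it has degraded beyond tolerance. Names and numbers are illustrative.
    class ImmuneSystem {
        private static final double TOLERANCE = 0.05;   // allow 5% degradation

        static boolean shouldRollback(double baseline, double canary) {
            return canary < baseline * (1 - TOLERANCE);
        }

        public static void main(String[] args) {
            double baselineOrdersPerMin = 120.0;   // previous version
            double canaryOrdersPerMin = 97.0;      // what canary users see
            if (shouldRollback(baselineOrdersPerMin, canaryOrdersPerMin)) {
                System.out.println("Metric degraded beyond tolerance - rolling back");
                // here you would invoke your automated rollback tooling
            }
        }
    }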
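
And a minimal feature-toggle sketch (again mine, not from the talk – the flag name and classes are assumptions): keep the flag check in one place so a single feature can be switched off in production without a redeploy, and so the dead code path is easy to find and delete later.

    import java.util.Map;

    // Minimal feature-toggle sketch: flags come from configuration, so one
    // feature can be turned off without rolling back the whole release.
    class Features {
        private final Map<String, Boolean> flags;   // e.g. loaded from config

        Features(Map<String, Boolean> flags) { this.flags = flags; }

        boolean isOn(String name) { return flags.getOrDefault(name, false); }
    }

    class CheckoutService {
        private final Features features;

        CheckoutService(Features features) { this.features = features; }

        String checkout() {
            // Single toggle point: both paths stay testable, and the old one
            // should be deleted once the new flow has proven itself.
            return features.isOn("new-checkout") ? newCheckoutFlow() : oldCheckoutFlow();
        }

        private String newCheckoutFlow() { return "new flow"; }
        private String oldCheckoutFlow() { return "old flow"; }
    }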

Agile and Craftsmanship Manifestos combined

June 10, 2010

In the interest of maximising signal I’ve combined the core of the Agile and Software Craftsmanship manifestos in one place. I’m working on a project to better understand and visualise the interactions between all of these values, but until that is done…

Well-crafted software
  over Working software
    over Comprehensive documentation

Steadily adding value
  over Responding to change
    over Following a plan

Community of professionals
  over Individuals and interactions
    over Processes and tools

Productive partnerships
  over Customer collaboration
    over Contract negotiation

Applying Medical Training Ideas to Software

September 5, 2009

This post was prompted in part by Corey Haines’ idea of using a dental school model to provide hands-on training for software apprentices, and by various other discussions comparing software development to other established professions.

Firstly a bit of background – my partner works at the Royal College of Physicians in London. So I’ve been grilling her to understand how doctors get trained and progress through their careers. There are some similarities between medicine and software development, so there might be ideas here that we as a profession can use. At the same time software is very different from medicine so obviously we can’t just transplant the processes straight across.

I’m going to give an overview of how medical training works, then try to map that to software. Finally I’ll leave some open questions about the difficulties I can see with applying the model, which all you clever people out there can discuss in the comments.  Note that I’m using UK medical terms and titles here, and I’m not an expert so the details may be slightly off, but the general ideas should be valid.

How does it work in medicine?

A medical professional is considered to still be in training until they reach the level of Consultant. So let’s take a quick look at what they go through to get there.

  • 5 years of medical school, 3 years of which is spent working in hospitals
  • 2 years as a Foundation House Officer (FHO); this time is spent working in a variety of specialties
  • 6 years as a Specialty Registrar, working in single or dual/complementary specialties
  • Consultant

There is continuous assessment at each level to ensure the doctor has the skills to progress from, for example, FHO to Registrar. It isn’t just a case of doing the job for a couple of years and getting automatically promoted. If doctors fail to progress after a certain amount of time/number of attempts then they are effectively ejected from the profession.

The 6 years of Specialist experience at Registrar level are tracked and assessed by a governing body to ensure the experience is relevant, and also to take into account sabbaticals, maternity leave, flexible working arrangements (e.g. 4-day weeks), etc. During this time they are also expected to attend accredited training courses to improve their knowledge and skills.

For each Specialty there is a Specialty Advisory Committee (SAC). The SACs meet regularly to define the training curriculum and the competencies required to become a consultant. They also take a view on training courses, all of which are accredited by the Royal Colleges. Finally, the SAC looks at specific cases to assess the value of non-standard experiences.

The General Medical Council mandates that doctors at Consultant level continually develop their skills and contribute to the teaching and development of others. Failure to do so could result in a doctor being struck off.

There is a culture of learning throughout the profession; it is a compulsory part of the job and not something that can be glossed over or ignored. Doctors do their rounds with a host of students, encouraging the students to offer up their diagnoses. Surgical procedures typically take place with at least 2 surgeons present. It is an opportunity for senior doctors to teach techniques to one or more juniors, and for juniors to practice and demonstrate their skills with a senior doctor on hand to assess them and help out should things go wrong. Consultants are also expected to publish a peer-reviewed study every couple of years which advances the knowledge and skills of the profession.

How might it work in software?

We clearly need basic training that is more vocational and aimed at people becoming software developers in the real world. Having degree programmes which mix academic study with real-world work is key. Work placements must be at places that will teach people the right skills early on. It is no good sending students into businesses that don’t use good practices; if we do, we’ll end up teaching more bad habits than good. To do that we need a way of accrediting firms as suitable environments for learning our craft. For larger enterprises it might be more appropriate to accredit at the project, team or mentor level. Obviously the rapid rate of change in projects and people means that the accreditations will need regular reviews.

Another option for the “real world” part of students’ training could be to include compulsory work on significant open source projects (think GSoC). These have the size and complexity needed to help students begin to understand what working in the real world is like. Again, the key to getting this right is ensuring top-notch supervision: to help the students get started on the projects (which can sometimes be tricky), to ensure they do more good than harm, and to ensure they get the best training possible.

There is also a great deal of value in observing top quality craftsmen at work. Watching how someone more experienced works on more difficult problems (providing they carefully and slowly explain what they are doing) can provide a wider context for some of the techniques and practices. This could be done either in a classroom or on a placement.

The FHO and Registrar stages (think Apprentice & Journeyman) are really just work experience, but again we need a way of guaranteeing that the experience is high quality. This means assessing both the developer and the place of work. The apprenticeship stage would ideally be done at one of Corey’s apprenticeship schools, or alternatively at accredited businesses which have the capacity to teach.

To progress from Journeyman to Master (Consultant) we really need experts in each software specialism to make the call on what is required to move up. However, this raises the question: what are our specialisms? Testing, coding, analysis, support? What about SAs, DBAs and networking specialists – do we want to include them? Sub-specialisms might include things like HCI, real-time/embedded systems, databases, messaging, concurrency and security. In some ways we already have specialisms, but their definitions are loose and we often have to be experts in more than one. Of course, part of our problem might be that we don’t specialise enough.

Continuing education throughout our careers is clearly essential, as we need to keep up with rapidly changing technologies; that means we also need a commitment to teaching each other. Many people get this already, but many more don’t. I do have a concern, though, that bad ideas propagate just as fast as good ones. We possibly need to move away from our blogs (oh the irony) to reduce the noise and confusion, and instead put a greater emphasis on peer-reviewed, evidence-based studies. The question then is whether these studies can move fast enough, or be applicable across vastly different kinds of project. After all, doctors are all effectively working on different installations of the same system; we aren’t. Maybe there is a middle ground in sites like StackOverflow, which manage to draw out the current consensus from the community.

Challenges

As I’ve rambled on long enough, here is a quick summary of the big problems I see that I have no idea how to solve:

  • How do we get buy in from firms that are quite happy with the (bad) ways they are doing things?
  • How do we define our specialties, and what it means to be a master in them?
  • How do we certify and license people, and get recognition of that licensing, especially by big business?
  • How do we “strike off” developers who don’t do their bit for the community and for progressing the state of the art?
  • How do we measure developers’ contributions to the community?
  • How do we scale this up? The Royal Colleges were set up when medicine was relatively small and they grew and evolved with the profession; software is already huge.
  • How do we administer it all? Where are our Royal Colleges?

This is a massive subject and the debates will rage on.  I honestly think that the professionalism problem is the biggest challenge we face as an industry. We have so much bad software out there, and so many developers who don’t have the support they need to start turning it around. We need to learn what we can from other professions and we need to learn it fast, so we solve this problem before governments try to solve it for us.

More Signal – Less Noise

September 1, 2009

Hi there – it is time to break radio silence. If I’m going to be part of this Software Craftsmanship/Professionalism movement, then it is time to start acting like it. That means actually doing stuff, and not just spending every waking hour reading the output of everyone else. So here is spur-of-the-moment post number one. It’s late and I have to get up early, so don’t expect too much…

GeePawHill has an excellent series of posts down on the creek, and in this one he said something which struck a chord with me. He cleaned up all the warnings in a class to ensure that “all we get is signal, without any noise”. This is nothing new, but the more I thought about this little comment, the more ways I saw to apply it. I think it could be a unifying idea to help explain many good practices. It seems close to the Lean idea of Eliminating Waste, but it goes slightly further, as it is also about maximising signal.

Essentially, anything that detracts from the information you need at that precise moment is noise that we should strive to eliminate. Here are a few assorted examples:

  • Compiler warnings
  • Permanently broken tests
  • Code clutter, like using Java Iterators instead of for-each loops (compared in the sketch after this list)
  • Law of Demeter violations
  • Code Duplication (more a lack of clarity in the signal)
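
The Iterator point in code (a trivial sketch of mine): the for-each version says exactly the same thing with less ceremony, so more of what is left on screen is signal.

    import java.util.Iterator;
    import java.util.List;

    class SignalVsNoise {
        // Noisy: the iteration machinery obscures the intent.
        static void printAllNoisy(List<String> items) {
            for (Iterator<String> it = items.iterator(); it.hasNext(); ) {
                String item = it.next();
                System.out.println(item);
            }
        }

        // Signal: the loop says only what it does.
        static void printAll(List<String> items) {
            for (String item : items) {
                System.out.println(item);
            }
        }
    }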

But what about the signal?  How do we maximise the signal we get back, and ensure it is as clear as possible?

  • CI tools
  • Continuous Testing tools like JUnit Max
  • Minimising the scope of tests
  • Clean Code

Nothing too revelatory there, but it brings together a few seemingly unrelated concepts to discover some common themes. Over the coming days/weeks I’m going to try to explore this further. I just hope I’m not adding to the noise.