Maps 4.0.0-RC1 released!

I’m happy to announce the first release candidate for Maps 4.0. Maps is a MediaWiki extension to work with and visualize geographical information. Maps 4.0 is the first major release of the extension since January 2014, and it brings a ton of “new” functionality.

First off, this blog post is about a release candidate, which is meant to gather feedback and is not suitable for production use. The 4.0 release itself will be made one week from now if no issues are found.

Almost all features from the Semantic Maps extension got merged into Maps, with the notable omission of the form input, which now resides in Yaron Koren's Page Forms extension. I realized that spreading the functionality over both Maps and Semantic Maps was hindering development and making things more difficult for users than needed. Hence Semantic Maps is now discontinued, with Maps containing the coordinate datatype, the map result formats for each mapping service, the KML export format and distance query support. All these features will automatically enable themselves when you have Semantic MediaWiki installed, and can be explicitly turned off with the new egMapsDisableSmwIntegration setting.

The other big change is that, after 7 years without change, the default mapping service was switched from Google Maps to Leaflet. The reason for this is that Google now requires obtaining and specifying an API key for its maps to work on new websites. This would leave some users confused when they first installed the Maps extension and got a non-functioning map, even though the API key is mentioned in the installation instructions. Google Maps is of course still supported, and you can make it the default again on your wiki via the egMapsDefaultService setting.
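For example, restoring Google Maps as the default would look roughly like this in LocalSettings.php (a sketch, assuming the usual $eg-prefixed global convention for Maps settings):

    $egMapsDefaultService = 'googlemaps3'; // make Google Maps the default mapping service again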

Another noteworthy change is the addition of the egMapsDisableExtension setting, which allows for disabling the extension via configuration, even when it is installed. This has often been requested by those running wiki farms.
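Both new kill switches are plain settings in LocalSettings.php; a minimal sketch, again assuming the $eg-prefixed globals:

    $egMapsDisableExtension = true;       // turn Maps off entirely, even though it is installed
    $egMapsDisableSmwIntegration = true;  // keep Maps, but turn off its Semantic MediaWiki integration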

For a full list of changes, see the release notes. Also check out the new features in Maps 3.8, Maps 3.7 and Maps 3.6 if you have not done so yet.

Upgrading

Since this is a major release, please be aware of the breaking changes: you might need to update configuration or content on your wiki. Update your mediawiki/maps version in composer.json to ~4.0@rc (or ~4.0 once the real release has happened) and run composer update.
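The relevant part of composer.json would look roughly like this (your file will of course contain other entries):

    {
        "require": {
            "mediawiki/maps": "~4.0@rc"
        }
    }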

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

Object Oriented Lua code

During the last few weeks I’ve been refactoring some horrible Lua code. This has been a ton of fun so far, and I learned many new things about Lua that I’d like to share.

Such Horrible Code

The code in question is that of a scripted Supreme Commander: Forged Alliance Forever map called Final Rush Pro v4. Essentially all the code resides in a single Lua file slightly over 2500 lines long. It is entirely procedural, uses global state all over, contains plenty of copy-pasted code and, unsurprisingly, does not have a single test. What's more, at least some of the code must have been written by people not even at home with procedural programming, as there are several places where massive if-else blocks are used rather than loops.

Much Refactoring

The high level approach I took was to identify cohesive sets of code in the huge file and move them out into dedicated files. These dedicated files would then have their dependencies explicitly defined and could be cleaned up one by one. This graph shows the lines of code of the Lua file that acts as entry point over time:

(Graph: lines of code of the entry point file over time)

The first example can be seen in moving the "PrebuildTents" code into its own file. This code, coincidentally, nicely illustrates the copy-pasting and insane use of if-else over loops. One huge issue that remains when simply moving the code like that is that it stays in global/static scope. In other words, it's not possible to use the code in the file with two different sets of local values. I did some searching on how to idiomatically achieve polymorphism in Lua.

One of the first things I read through was the Object-Oriented Programming pages of the Programming in Lua book. Following that approach, I created the very first version of a simple wrapper around a list of player armies. As you can see there, I wrote tests for that code (more on those tests below). I was not too happy with that approach, as it does not provide nice encapsulation. After looking at the code of some of the more prominent Lua tools I came across, I decided to go with a closure based approach instead. Initially I would define a this local table, which would then get functions bound to it. I switched to returning a map at the end of the closure, which makes it clearer what the public functions are, and leaves one less local variable to worry about. (The closure is assigned to newInstance rather than just returned due to the way the import mechanism of the framework works, which is different from Lua's native require.)
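A minimal sketch of what that structure looks like (the module and its members are made up for illustration, not the actual map code):

    -- Hypothetical Armies.lua: a closure based "class" with private state.
    -- The constructor is assigned to newInstance instead of being returned from
    -- the file, to play nice with the framework's import mechanism mentioned above.
    newInstance = function(armyNames)
        -- private state, only reachable through the returned functions
        local names = armyNames or {}

        local function count()
            return #names
        end

        local function contains(name)
            for _, candidate in ipairs(names) do
                if candidate == name then
                    return true
                end
            end
            return false
        end

        -- the table returned here is the public interface
        return {
            count = count,
            contains = contains
        }
    end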

A downside of how the code in the files is organized is that you essentially need to read backwards when looking at how it is invoked. The public functions are listed at the very end of the file, with their dependencies defined before, and their dependencies defined before that. It would be nice to have the public functions more clearly visible at the top of the file, which is where you need to look for the constructor signature already.

Now that the creation of cohesive sets of code is mostly done, the entry point file is down to 44 lines of code. It defaults some options coming from the framework/game, and then invokes a high level module that sets up the various aspects of the game, and which totals 70 lines of code.
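Roughly, the entry point now has the following shape (a sketch only; the option and module names are invented and differ from the real file):

    -- Hypothetical sketch of the trimmed down entry point: fill in defaults for
    -- a few game options, then hand everything else to a single high level module.
    local options = ScenarioInfo.Options or {}       -- options table provided by the game
    options.Difficulty = options.Difficulty or 2     -- made-up option names
    options.RandomEvents = options.RandomEvents or 1

    -- import is the framework's module loading mechanism, not Lua's require
    import('/maps/final_rush_pro_5/src/Game.lua').newInstance(options).setUp()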

My next steps are further cleanup of individual sets of code, with a focus on minimizing dependencies and separating concerns. For this I’m using practices, principles and patterns which are by and large language agnostic, so I won’t get into them here. You can find the code of the new version of the map in the Final Rush Pro 5 repository on GitHub, including many small refactoring commits in the git history.

Very Environment

My first modifications to the code were made with Notepad++ on Windows. While that editor provides syntax highlighting, there is no static code analysis or any of the essential things that require it, such as navigating to definitions. Hence I switched to my usual development environment, IntelliJ on Linux, using the IntelliJ Lua plugin.

While that switch to Linux made refactoring the code easier, it also prevented me from (manually) testing the code. This code, like many legacy balls of mud, binds very tightly to its framework, in this case the Supreme Commander game, which only runs on Windows. While it's often good to remove such binding, it's not a trivial task, and not something I'd want to attempt without a fast feedback cycle.

The lack of fast feedback drove me to find a Lua testing tool to use. Several are listed on the lua-users wiki. After checking the project health of several tools, I decided to go with Busted, which I installed via LuaRocks. I then proceeded to create a wrapper for the list of players in the game (to replace code that was not only crappy but also incorrect) using Test Driven Development, resulting in a nice spec for the wrapper.
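To give an impression, a Busted spec for such a wrapper might look roughly like this (describe, it and the assert helpers are Busted's API; the wrapper module and its functions are hypothetical):

    -- Hypothetical spec file, run with the busted command line tool.
    local newPlayers = require("Players")  -- made-up wrapper module returning a constructor

    describe("The players wrapper", function()
        it("reports zero players for an empty army list", function()
            assert.are.equal(0, newPlayers({}).count())
        end)

        it("knows which armies are present", function()
            local players = newPlayers({ "ARMY_1", "ARMY_2" })
            assert.is_true(players.contains("ARMY_1"))
            assert.is_false(players.contains("ARMY_9"))
        end)
    end)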

Unfortunately the same approach would not work for cleaning up most of the other code. The framework binding was simply too tight, and in a lot of cases, contrary to the typical (non-game) scenarios I'm used to, perhaps the best that can be done. Hence I switched back to Windows.

On Windows I installed IntelliJ with the Lua plugin, TortoiseGit, and Busted. The latter was quite a hurdle, since my Windows administration skills are not exactly stellar. For Busted I needed to install Lua (ya really), LuaRocks and the MinGW compiler. Being able to run the tests in the IDE’s terminal was worth it though.

Wow Release

Version 5 of the map has now been released, see the release post for details on the new features.

Clean Architecture diagrams

I’m happy to release a few Clean Architecture related diagrams into the public domain (CC0 1.0).

These diagrams were created at Wikimedia Deutschland by Jan Dittrich, Charlie Kritschmar and myself for an upcoming presentation I'm doing on the Clean Architecture. There are plenty of diagrams available already if you include the Onion Architecture and Hexagonal Architecture, which have essentially the same structure, though none I've found so far has a permissive license. Furthermore, I'm not so happy with the wording and structure of a lot of them. In particular, some bite off more than they can chew with the "dependencies point inward" rule, glossing over important restrictions which end up not being visualized at all.

These images are SVGs. Click them to go to Wikimedia Commons where you can download them.

(Diagrams: Clean Architecture; Clean Architecture + Bounded Context; Clean Architecture + Bounded Contexts)

Maps 3.8 for MediaWiki released

I’m happy to announce the immediate availability of Maps 3.8. This feature release brings several enhancements and new features.

  • Added Leaflet marker clustering (by Peter Grassberger); see the example below this list
    • markercluster: Enables clustering; multiple markers are merged into one marker.
    • clustermaxzoom: The maximum zoom level at which clusters may exist.
    • clusterzoomonclick: Whether clicking on a cluster zooms into it.
    • clustermaxradius: The maximum radius that a cluster will cover.
    • clusterspiderfy: At the lowest zoom level, markers are separated so you can see them all.
  • Added Leaflet fullscreen control (by Peter Grassberger)
  • Added OSM Nominatim Geocoder (by Peter Grassberger)
  • Upgraded Leaflet library to its latest version (1.0.0-r3) (by Peter Grassberger)
  • Made removal of marker clusters more robust. (by Peter Grassberger)
  • Unified system messages for several services (by Karsten Hoffmeyer)
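As a rough wikitext sketch of the new Leaflet clustering options (the coordinates and values are only examples):

    {{#display_map:
      51.5074, -0.1278;
      52.5200, 13.4050;
      52.3702, 4.8952
      |service=leaflet
      |markercluster=on
      |clustermaxzoom=12
      |clusterzoomonclick=on
      |clustermaxradius=80
    }}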

(Screenshot: Leaflet marker clusters)

Google Maps API key

Due to changes to Google Maps, an API key now needs to be set. Upgrading to the latest version of Maps will not break the maps on your wiki in any case, as the change really is on Google's end. If they are still working, you can keep running an older version of Maps, though of course it's safer to upgrade and set the API key anyway. If you have a new wiki, or the maps broke for some reason, you will need to get Maps 3.8 or later and set the API key (see the example below the list). See the installation configuration instructions for more information.

  • Added Google Maps API key egMapsGMaps3ApiKey setting (by Peter Grassberger)
  • Added Google Maps API version number egMapsGMaps3ApiVersion setting (by Peter Grassberger)
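Setting the key is a one-liner in LocalSettings.php; a sketch with a placeholder value, assuming the usual $eg-prefixed global:

    $egMapsGMaps3ApiKey = 'your-google-maps-api-key'; // use your own key from the Google API Console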

Upgrading

Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

Notes: Implementing DDD, chapter 2

Notes from Implementing Domain-Driven Design, chapter 2: Domains, Subdomains and Bounded Contexts (p58 and later only)

  • User interface and service-oriented endpoints are within the context boundary
  • Domain concepts in the UI form the Smart UI Anti-Pattern
  • A database schema is part of the context if it was created for it and not influenced from the outside
  • Contexts should not be used to divide developer responsibilities; modules are a more suitable tactical approach
  • A bounded context has one team that is responsible for it (while teams can be responsible for multiple bounded contexts)
  • Access and identity is its own context and should not be visible at all in the domain of another context. The application services / use cases in the other context are responsible for interacting with the access and identity generic subdomain
  • Context Maps are supposedly real cool

Maps 3.7 for MediaWiki released

I’m happy to announce the immediate availability of Maps 3.7. This feature release brings some minor enhancements.

  • Added rotate control support for Google Maps (by Peter Grassberger)
  • Changed coordinate display on OpenLayers maps from long-lat to lat-long (by Peter Grassberger)
  • Upgraded Google marker cluster library to its latest version (2.1.2) (by Peter Grassberger)
  • Upgraded Leaflet library to its latest version (0.7.7) (by Peter Grassberger)
  • Added missing system messages (by Karsten Hoffmeyer)
  • Internal code enhancements (by Peter Grassberger)
  • Removed broken custom map layer functionality. You no longer need to run update.php for full installation.
  • Translation updates by TranslateWiki

Upgrading

Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

Beware that as of Maps 3.6, you need MediaWiki 1.23 or later, and PHP 5.5 or later. If you choose to remain with an older version of PHP or MediaWiki, use Maps 3.5. Maps works with the latest stable versions of both MediaWiki and PHP, which are the versions I recommend you use.

PHP Unconference Europe 2016

Last week I attended the 2016 edition of the PHP Unconference Europe, taking place in Palma de Mallorca. This post contains my notes from various conference sessions. Be warned, some of them are quite rough.

Overall impression

Before getting to the notes, I’d like to explain the setup of the unconference and my general impression.

The unconference is two days long, not counting associated social events before and afterwards. The first day started with people discussing in small groups which sessions they would like to have, either by leading them themselves, or just wanting to attend. These session ideas were written down and put on papers on the wall. We then went through them one by one, with someone explaining the idea behind each session, and one or more presenters / hosts being chosen. The final step of the process was to vote on the sessions. For this, each person got two "sticky dots" (what are those things called anyway?), which they could either both put onto a single session, or split and vote on two sessions.

On each day we had 4 such sessions, with long breaks in between, to promote interaction between the attendees.

Onto my notes for individual sessions:

How we analyze your code

Analysis and metrics can be used for tracking progress and for analyzing the current state. Talk focuses on current state.

  • Which code is important
  • Probably buggy code
  • Badly tested code
  • Untested code

Finding the core (kore?): code rank (like Google page rank): importance flows to classes that are depended upon (fan-in). Qafoo Quality Analyzer. Reverse code rank: classes that depend on lots of other classes (fan-out)

Where do we expect bugs? Typically where code is hard to understand. We can look at method complexity: cyclomatic complexity, NPath complexity. Line Coverage exists, Path Coverage is being worked upon. Parameter Value Coverage. CRAP.

Excessive coupling is bad. Incoming and outgoing dependencies. Different from code rank in that only direct dependencies are counted. Things that are depended on a lot should be stable and well tested (essentially the Stable Dependencies Principle).

Qafoo Quality Analyzer can be used to find dependencies across layers when they are in different directories. Very limited at present.

When finding highly complex code, don’t immediately assume it is bad. There are valid reasons for high complexity. Metrics can also be tricked.

The evolution of web application architecture

How systems interact with each other. Starting with simple architecture, looking at problems that arise as more visitors arrive, and then seeing how we can deal with those problems.

Users -> Single web app server -> DB

Next step: Multiple app servers + load balancers (round robin + session caching server)

Launch of shopping system resulted in app going down, as master db got too many writes, due to logging “cache was hit” in it.

Different ways of caching: entities, collections, full pages. Cache invalidation is hard, lots of dependencies even in simple domains.

When too many writes: sharding (split data across multiple nodes), vertical (by columns) or horizontal (by rows). Loss of referential integrity checking.

Complexity with relational database systems -> NoSQL: sharding, multi master, cross-shard queries. Usually no SQL or referential integrity, though those features are already lost when using sharding.

Combination of multiple persistence systems: problems with synchronization. Transactions are slow. Embrace eventual consistency. Same updating strategies can be used for caches.

Business people often know SQL, yet not NoSQL query languages.

Queues can be used to pass data asynchronously to multiple consumers. Following data flow of an action can be tricky. Data consistency is still a thing.

Microservices: separation of concerns on service and team level. Can simplify via optimal tech stack per service. Makes things more complicated: need automated deployment, orchestration, eventual consistency, failure handling.

Boring technology often works best, especially at the beginning of a project. Start with the simplest solution that works. Take team skills into account.

How to fuck up projects

Before the project

  • Buzzword first design
  • Mismatching expectations: huge customer expectations, no budget
  • Fuzzy ambitious vocabulary, directly into the contract (including made up words)
  • Meetings, bad mood, no eye contact
  • No decisions (no decision making process -> no managers -> saves money)
  • Customer Driven Development: customer makes decisions
  • Decide on environment: tools, mouse/touchpad, 1 big monitor or 2 small ones, JIRA, etc
  • Estimates: should be done by management

During the project

  • Avoid ALL communication, especially with the customer
  • If communication cannot be avoided: mix channels
  • Responsibility: use group chats and use “you” instead of specific names (cc everyone in mails)
  • Avoid issue trackers, this is what email and Facebook are for
  • If you cannot avoid issue trackers: use multiple or have one ticket with 2000 notes
  • Use ALL the programming languages, including PHP-COBOL
  • Do YOUR job, but nothing more
  • Only pressure makes diamonds: coding on the weekend
  • No breaks so people don’t lose focus
  • Collect metrics: Hours in office, LOC, emails answered, tickets closed

Completing the project

  • 3/4 projects fail: we can’t do anything about it
  • New features? Outsource
  • Ignore the client when they ask about the completed project
  • Change the team often, fire people on a daily basis
  • Rotate the customer’s contact person

Bonus

  • No VCS. FTP works. Live editing on production is even better
  • http://whatthecommit.com/
  • Encoding: emojis in function names, umlauts in file names. Mix encodings, also in MySQL
  • Agile is just guidelines, change goals during sprints often
  • Help others fuck up: release it as open source
  • git blame-someone-else

The future of PHP

This session started with some words from the moderator, who mainly talked about performance, portability and future adoption of, or moving away from, PHP.

  • PHP now fast enough to use many PHP libraries
  • PHP now better for long running tasks (though still no 64-bit for Windows)
  • PHP now has an Abstract Syntax Tree

The discussion that followed was primarily about the future of PHP in terms of adoption. The two languages most mentioned as competitors were JavaScript and Java.

Java because it is very hard to get PHP into big enterprises, where people tend to cling to Java. A point made several times about this is that such choices have very little to do with technical sensibility, and are instead influenced by the education system, languages already used, newness/hipness and the HiPPO. Most people also don't have the relevant information to make an informed choice, and do not make the effort to look it up, as they already have a preference.

JavaScript is a competitor because web-based projects, be it with a backend in PHP or another language, need more and more JavaScript, with no real alternative. It was mentioned several times that not having alternatives is bad. Having multiple JS interpreters is cool; JS being the only choice for browser programming is not.

Introduction to sensible load testing

In this talk the speaker explained why it is important to do realistic load testing, and how to avoid common pitfalls. He explained how JMeter can be used to simulate real user behavior during peak load times. Preliminary slides link.

Domain Objects: not just for Domain Driven Design

This session was hard to choose, as it coincided with “What to look for in a developer when hiring, and how to test it”, which I also wanted to attend.

The Domain Objects session introduced what Value Objects are, and why they are better than long parameter lists and passing around values that might be invalid. While sensible enough, it was all very basic, and unfortunately contained nothing new for me whatsoever. I'm thinking it would have been better to do this as a discussion, partly because the speaker was clearly very inexperienced, and gave most of the talk with his arms crossed in front of him. (Speaker, if you are reading this, please don't be discouraged, practice makes perfect.)

Performance monitoring

I was only in the second half of this session, during which two performance monitoring tools were presented: Tideways by Qafoo, and Instana.

Maps 3.6 for MediaWiki released

I’m happy to announce the immediate availability of Maps 3.6. This feature release brings marker clustering enhancements and a number of fixes.

These parameters were added to the display_map parser function, to allow for greater control over marker clustering. They are only supported together with Google Maps (see the example below the list).

  • clustergridsize: The grid size of a cluster in pixels
  • clustermaxzoom: The maximum zoom level at which a marker can be part of a cluster
  • clusterzoomonclick: Whether the default behavior of clicking on a cluster is to zoom in on it
  • clusteraveragecenter: Whether the cluster location should be the average of all its markers
  • clusterminsize: The minimum number of markers required to form a cluster
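A rough wikitext sketch of how these look in practice with the Google Maps service (coordinates and values are only examples; clustering itself is switched on with the markercluster parameter):

    {{#display_map:
      40.7128, -74.0060;
      40.7306, -73.9866;
      40.6782, -73.9442
      |service=googlemaps3
      |markercluster=on
      |clustergridsize=60
      |clustermaxzoom=15
      |clusterminsize=3
    }}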

Bugfixes

  • Fixed missing marker cluster images for Google Maps
  • Fixed duplicate markers in OpenLayers maps
  • Fixed URL support in the icon parameter

Credits

Many thanks to Peter Grassberger, who made the listed fixes and added the new clustering parameters. Thanks also go to Karsten Hoffmeyer for miscellaneous support and to TranslateWiki for providing translations.

Upgrading

Since this is a feature release, there are no breaking changes, and you can simply run composer update, or replace the old files with the new ones.

There are, however, compatibility changes to keep in mind. As of this version, Maps requires PHP 5.5 or later and MediaWiki 1.23 or later. composer update will not give you a version of Maps incompatible with your version of PHP, though it is presently not checking your MediaWiki version. Fun fact: this is the first bump in minimum requirements since the release of Maps 2.0, way back in 2012.

 

 

Is Pair Programming worth it?

Every now and then I get asked how to convince one's team members that Pair Programming is worthwhile. Often the person asking, or the people I did Pair Programming with, are obviously enthusiastic about the practice and willing to give it a fair chance, yet are themselves not really convinced that it is actually worth the time. In this short post I share how I look at it, in the hope that it is useful to you personally, and in convincing others.

(Image: Extreme Programming)

The cost of Pair Programming

Suppose you are new to the practice and doing it very badly. You have one person hogging the keyboard and not sharing their thoughts, with the other paying more attention to Twitter than to the development work. In this case you basically spend twice the time for the same output. In other words, the development cost is multiplied by two.

Personally I find it tempting to think about Pair Programming as doubling the cost, even though I know better. How much more total developer time you need is unclear, and really depends on the task. The more complex the task, the less overhead Pair Programming will cause. What is clear is that when your execution of the practice is not pathologically bad, and when the task is more complicated than something you could trivially automate, the cost multiplication is well below two. An article on the c2 wiki suggests 10-15% more total developer time, with the elapsed time being about 55% of that of solo development. In those terms, a task that takes 10 hours solo costs roughly 11 to 11.5 developer hours when paired, yet is finished in about 5.5 hours of calendar time.

If these are the only cost implications you think about with regard to Pair Programming, it's easy to see how you will have a hard time justifying it. Let's look at what makes the practice actually worthwhile.

The cost of not Pair Programming

If you do Pair Programming, you do not need a dedicated code review step, because Pair Programming is a continuous application of review. Not only do you not have to put time into a dedicated review step, the quality of the review goes up, as communication is much easier. The involved feedback loops are shortened. With dedicated review, the reviewer will often have a hard time understanding all the relevant context and intent. Questions get asked and issues get pointed out. Some time later the author of the change, who in the meanwhile has been working on something else, needs to get back to the reviewer, presumably forcing two mental context switches. When you are used to such a process, it is easy to become blind to this kind of waste unless you pay deliberate attention to it. Pair Programming eliminates this waste.

The shorter feedback loops and easier communication also help you with design questions. You have a fellow developer sitting next to you who you can bounce ideas off, and who is even up to speed with what you are doing. How great is that? Pair Programming can be a lot of fun.

The above two points make Pair Programming pay for itself and more, in my opinion, though it offers a number of additional benefits. You gain true collective ownership, and build shared commitment. There is knowledge transfer, and Pair Programming is an excellent way of onboarding new developers. You gain higher quality, both internal, in the form of better design, and external, in the form of fewer defects. While those benefits are easy to state, they are by no means insignificant, and deserve thorough consideration.

Give Pair Programming a try

As with most practices there is a reasonable learning curve, which will slow you down at first. Such investments are needed to become a better programmer and contribute more to your team.

Many programmers are more introverted and find the notion of having to pair program rather daunting. My advice when starting is to begin with short sessions. Find a colleague you get along with reasonably well and sit down together for an hour. Don’t focus too much on how much you got done. Rather than setting some performance goal with an arbitrary deadline, focus on creating a habit such as doing one hour of Pair Programming every two days. You will automatically get better at it over time.

If you are looking for instructions on how to Pair Program, there is plenty of google-able material out there. You can start by reading the Wikipedia page. I recommend paying particular attention to the listed non-performance indicators. There are also many videos, be it conference talks, or dedicated explanations of the basics.

Such disclaimer

I should note that while I have some experience with Pair Programming, I am very much a novice compared to those who have done it full time for multiple years, and can only guess at the sage incantations these mythical creatures would send your way.

(Image: Extreme Pair Programming)

I T.A.K.E. 2016

Last week I attended the I T.A.K.E. unconference in Bucharest. This unconference is about software development, and has tracks such as code quality, DevOps, craftsmanship, microservices and leadership. In this post I share my overall impressions as well as the notes I took during the unconference.

Conference impression

This was my first attendance of I T.A.K.E., and I had not researched in detail what the setup would look like, so I did not really know what to expect. What surprised me is that most of the unconference is actually pretty much a regular conference. For the majority of the two days, there were several tracks in parallel, with talks on various topics. The unconference part is limited to two hours each day, during which there is an open space.

Overall I enjoyed the conference and learned some interesting new things. Some talks were a bit underwhelming quality-wise, with speakers not properly using the microphone, code on slides in such quantity that no one could read it, and speakers looking at their slides the whole time rather than connecting with the audience. The parts I enjoyed most were the open space, conversations during coffee breaks, and a little pair programming. I liked I T.A.K.E. more than the recent CraftConf, though less than SoCraTes, which perhaps is a high standard to set.

Keynote: Scaling Agile

Day one started with a keynote by James Shore (who you might know from Let’s Code: Test-Driven JavaScript) on how to apply agile methods when growing beyond a single team.

The first half of the talk focused on how to divide work amongst developers, be it between multiple teams, or within a team using “lanes”. The main point that was made is that one wants to minimize dependencies between groups of developers (so people don’t get blocked by things outside of their control), and therefore the split should happen along feature boundaries, not within features themselves. This of course builds on the premise that the whole team picks up a story, and not some subset or even individuals.

A point that caught my interest is that while collective ownership of code within teams is desired, sharing responsibility between teams is more problematic. The reason for this is that supposedly people will not clean up after themselves enough, as it's not their code, and will rather resort to finger-pointing at the other team(s). As James eloquently put it:

My TL;DR for this talk is basically: low coupling, high cohesion 🙂

Mutation Testing to the rescue of your Tests

During this talk, one of the first things the speaker said is that the only goal of tests is to make sure there are no bugs in production. This very much goes against my point of view, as I think the primary value is that they allow refactoring with confidence, without which code quality suffers greatly. Additionally, tests provide plenty of other advantages, such as documenting what the system does, and forcing you to pay a minimal amount of attention to certain aspects of software design.

The speaker continued by asking who uses test coverage, and had a quote from Uncle Bob on needing 100% test coverage. After another few minutes of build-up to the inevitable denunciation of chasing test coverage as being a good idea, I left to go find a more interesting talk.

Afterwards, during one of the coffee breaks, I talked with some people who had joined the talk 10 minutes or so after it started and had actually found it interesting. Apparently the speaker got to the actual topic of the talk, mutation testing, and presented it as a superior metric. I did not know about mutation testing before and recommend you have a look at the Wikipedia page about it if you do not know what it is. It automates an approximation of what you do in trying to determine which tests are valuable to write. As with code coverage, one should not focus on the metric itself though, and merely use it as the tool that it is.
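As a rough illustration of the idea (my own example, not from the talk): a mutation testing tool makes small changes, called mutants, to the production code and re-runs the test suite; mutants that survive point at behavior your tests do not pin down.

    -- Original production code:
    local function isEligible(age)
        return age >= 18
    end

    -- A mutant the tool might generate, flipping >= to >:
    --     return age > 18
    --
    -- A test suite that only checks isEligible(30) passes against both versions,
    -- so the mutant survives. Adding the boundary case
    -- assert.is_true(isEligible(18)) would kill it.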

Interesting related posts:

Raising The Bar

A talk on Software Craftsmanship that made me add The Coding Dojo Handbook to my to-read list.

Metrics For Good Developers

  • Metrics are for developers, not for management.
  • Developers should be able to choose the metrics.
  • Metrics to get a real measure of quality, not just “it feels like we’re doing well”
  • Measuring the number of production defects.
  • Make metrics visible.
  • Sometimes it is good to have metrics for individuals and not the whole team.
  • They can be a feedback mechanism for self improvement.

Open Space

The Open Space is a two hour slot which puts the "un" in unconference. It starts with a marketplace, where people propose sessions on topics of their interest. These sessions are typically highly interactive, in the form of self-organized discussions.

Open Space: Leadership

This session started by people writing down things they associate with good leadership, and then discussing those points.

Two books were mentioned, the first being The Five Dysfunctions of a Team.

The second book was Leadership and the One Minute Manager: Increasing Effectiveness Through Situational Leadership.

Open Space: Maintenance work: bad and good

This session was about finding reasons to dislike doing maintenance work, and then finding out how to look at it more positively. My input here was that a lot of the negative things, such as having to deal with crufty legacy code, can also be positive, in that they provide technical challenges absent in greenfield projects, and that you can refactor a mess into something nice.

I did not stay in this session until the very end, and unfortunately cannot find any pictures of the whiteboard.

Open Space: Coaching dojo

I had misheard what this was about and thought the topic was "Coding Dojo". Instead we did a coaching exercise focused on asking open-ended questions.

Are your Mocks Mocking at You?

This session was spread over two time slots, and I only attended the first part, as during the second one I had some pair programming scheduled. One of the first things covered in this talk was an explanation of the different types of Test Doubles, much like in my recent post 5 ways to write better mocks. The speakers also covered the differences between inside-out and outside-in TDD, and ended (the first time slot) with JavaScript peculiarities.

Never Develop Alone: always with a partner

In this talk, the speaker, who has been doing full-time pair programming for several years, outlined the primary benefits provided by, and challenges encountered during, pair programming.

Benefits: more focus / less distractions, more confidence, rapid feedback, knowledge sharing, fun, helps on-boarding, continuous improvement, less blaming.

Challenges: synchronization / communication, keyboard hogging

Do:

  • Ping-Pong TDD
  • Time boxing
  • Multiple keyboards
  • Pay attention and remind your pair if they don’t
  • Share your thoughts
  • Be open to new ideas and accept feedback
  • Mob programming

Live coding: Easier To Change Code

In this session the presenter walked us through some typical legacy code, and then demonstrated how one can start refactoring (relatively) safely. The code made me think of the Gilded Rose kata, though it was more elaborate/interesting. The presenter started by adding a safety net in the form of golden master tests and then proceeded with incremental refactoring.

Is management dead?

(Image: WMDE management)

Uncle Abraham certainly is most of the time! (Though when he is not, he approves of the below list.)

  • Many books on Agile, few on Agile management
  • Most common reasons for failure of Agile projects are management related
  • The Agile Manifesto includes two management principles
  • Intrinsic motivation via Autonomy, Mastery, Purpose and Connection
  • Self-organization: fully engaged, making own choices, taking responsibility
  • Needed for self-organization: skills, T-shaped, team players, co-location, long-lived team
  • Amplify and dampen voices
  • Lean more towards delegation to foster self-organization (levels of delegation)

(Image: the levels of delegation)

Visualizing codebases

This talk was about how to extract and visualize metrics from codebases. I was hoping it would include various code quality related metrics, but alas, the talk only included file-level details and simple line counts.