Maximizing Team Efficiency: Utilizing Periodic Check-ins and Data Analysis for Modularity Tune-Ups

This message was imported from the Ruby/Rails Modularity Slack server. Find more info in the import thread.

Message originally sent by slack user U70VMMV37TJ

Modularity Tune-Ups for Teams:
I’m exploring the idea of periodic check-ins with each team, showing them a list of easy wins or possibly high-impact wins for the packs they own. Part of this could be graphs and visualizations, if they lead to actionable behavior. I don’t want to be creating pretty graphs that get tossed into a file folder. Imagine a financial advisor who meets with you, but then there’s no change on your end.

Candidate recommendations:
Hypothesis 1: detect missing public APIs that others would want on your packs (based on inbound privacy violations)
Hypothesis 2: detect interesting architecture violations (mostly up) from your packs
Hypothesis 3: detect trends in new privacy violations into the pack
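A minimal sketch of Hypothesis 1, assuming packwerk-style packs under `packs/` and its `package_todo.yml` todo-file format (older packwerk versions call this file `deprecated_references.yml`); the function name and return shape are illustrative, not from any existing tool:

```ruby
require "yaml"

# Count inbound privacy violations per pack by scanning every pack's
# package_todo.yml. Each todo file lives in the *referencing* pack and lists
# the packs it violates, so constants that show up here most often are
# candidates for promotion to the referenced pack's public API.
def inbound_privacy_violations(root = ".")
  counts = Hash.new { |h, k| h[k] = Hash.new(0) }
  Dir.glob(File.join(root, "packs", "*", "package_todo.yml")).each do |todo|
    YAML.safe_load(File.read(todo)).each do |referenced_pack, constants|
      constants.each do |constant, details|
        next unless details["violations"].include?("privacy")
        counts[referenced_pack][constant] += details["files"].size
      end
    end
  end
  counts
end
```

Sorting each pack's constant counts in descending order would give the "easy wins" list for that pack's check-in.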

I’m curious if anyone has done something like this.

Message excluded from import.

Message originally sent by slack user U70VMMV37TJ

I haven’t been tracking new dependency violations. I’ve assumed that new ones are probably rare, but would need to check. Besides, if a pack needs to use another pack (either public or private interface), I wouldn’t stop them.

Message excluded from import.

Message excluded from import.

Message originally sent by slack user U7213XMGS3H

I think this is a good idea. We’re not doing this, but we are looking at trends: we track changes to various JSON artifacts about how our packs interact, and those can be analyzed over time.
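The trend tracking described above could be sketched roughly like this, assuming a hypothetical snapshot format of per-pack violation counts captured periodically (e.g. in CI) as JSON; the function name and file layout are assumptions for illustration:

```ruby
require "json"

# Compare two JSON snapshots of per-pack violation counts and report the
# delta for each pack, so a team can see which packs are trending up or
# down. Assumed snapshot shape: { "packs/foo": 12, "packs/bar": 3 }.
def violation_trends(previous_path, current_path)
  previous = JSON.parse(File.read(previous_path))
  current  = JSON.parse(File.read(current_path))
  (previous.keys | current.keys).to_h do |pack|
    [pack, current.fetch(pack, 0) - previous.fetch(pack, 0)]
  end
end
```

A positive delta flags a pack accumulating new violations, which is exactly the kind of thing worth surfacing in a periodic tune-up conversation.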

Message originally sent by slack user U7213XMGS3H

I think one use case for this kind of tool would be creating visibility for conversations your team should be having.