How to Run Tests at Scale with RSpec and Packwerk?

This message was imported from the Ruby/Rails Modularity Slack server. Find more info in the import thread.

Message originally sent by slack user U70VT3MWTF5

Has there been a discussion on how to run tests at scale?

• We are currently using RSpec, which unfortunately does not have parallel support (https://github.com/rspec/rspec-rails/issues/2104); we tried parallel_tests and turbo_tests, but neither of them worked for us.
• For each change we run all the tests.
I imagine with Packwerk one could run only the tests of the packs that are dependents of the modified files. Has anyone experimented with something like this? If not, we would be willing to collaborate.

We should probably have a conversation in #packwerk.
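As a rough illustration of the pack-based selection idea above (this is not a Packwerk feature, just a sketch built on information Packwerk packages already contain), the script below maps changed files to the packs that own them, walks the package.yml dependency graph in reverse, and runs only those packs’ specs. The packs/*/package.yml layout, the dependencies: key, per-pack spec/ directories, and origin/main as the diff base are all assumptions about a typical Packwerk setup.

```ruby
#!/usr/bin/env ruby
# Sketch only: select specs for packs affected by the files changed on a branch.
require "yaml"

# pack path => the pack paths it declares as dependencies in its package.yml
dependencies = Dir.glob("packs/*/package.yml").to_h do |path|
  [File.dirname(path), (YAML.load_file(path) || {})["dependencies"] || []]
end

# Invert the graph: pack => packs that depend on it
dependents = Hash.new { |hash, key| hash[key] = [] }
dependencies.each { |pack, deps| deps.each { |dep| dependents[dep] << pack } }

# Packs that own the files changed on this branch
changed_files = `git diff --name-only origin/main...HEAD`.split("\n")
changed_packs = changed_files
  .map { |file| dependencies.keys.find { |pack| file.start_with?("#{pack}/") } }
  .compact.uniq

# Expand to every pack that directly or transitively depends on a changed pack
affected = changed_packs.dup
queue = changed_packs.dup
until queue.empty?
  pack = queue.shift
  dependents[pack].each do |dependent|
    next if affected.include?(dependent)
    affected << dependent
    queue << dependent
  end
end

spec_dirs = affected.map { |pack| File.join(pack, "spec") }.select { |dir| Dir.exist?(dir) }
system("bundle", "exec", "rspec", *spec_dirs) unless spec_dirs.empty?
```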

Message originally sent by slack user U78DAHFQKN2

For the brute force approach, have you checked out Knapsack Pro? I use it and I’ve worked with others who use it too.

Message originally sent by slack user U70VT3MWTF5

We are currently using the parallelization option of CircleCI, which splits the tests according to their durations. See https://circleci.com/docs/parallelism-faster-jobs#test-splitting-to-speed-up-pipelines
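For reference, timing-based splitting on CircleCI looks roughly like the config below, based on CircleCI’s documented test-splitting commands. The parallelism value, spec glob, and JUnit output path are placeholders, and the rspec_junit_formatter gem is assumed so that store_test_results has timing data to split by on later runs.

```yaml
jobs:
  rspec:
    parallelism: 4
    steps:
      - checkout
      - run:
          name: Run this container's slice of the suite
          command: |
            TEST_FILES=$(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)
            bundle exec rspec --format progress \
                              --format RspecJunitFormatter --out tmp/rspec/results.xml \
                              $TEST_FILES
      # Uploading JUnit results is what feeds CircleCI the per-file timings
      # that --split-by=timings uses on subsequent runs.
      - store_test_results:
          path: tmp/rspec
```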

Message originally sent by slack user U78DAHFQKN2

I’ve written a library that records the dependency source files of each test that runs, and then for a subsequent change set it can compute which tests need to be re-run. Let me know if you want to discuss it further.
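As a rough, self-contained sketch of the general technique being described here (recording which source files each example touches), the snippet below uses only Ruby’s built-in Coverage module and RSpec hooks. It is not how AppMap itself works, and the tmp/test_file_dependencies.json output path is made up for illustration.

```ruby
# spec/support/dependency_recorder.rb -- illustrative sketch only.
# Must be required before the application code loads (e.g., at the very top of
# spec_helper.rb), because Coverage only tracks files loaded after Coverage.start.
# It also assumes no other coverage tool (such as SimpleCov) is already running.
require "coverage"
require "fileutils"
require "json"

Coverage.start

RSpec.configure do |config|
  deps_by_test = Hash.new { |hash, key| hash[key] = [] }

  config.around(:each) do |example|
    before = Coverage.peek_result
    example.run
    after = Coverage.peek_result

    # Files whose executed-line counts changed while this example ran were
    # exercised by it; keep only files that live inside the project.
    touched = after.select do |path, lines|
      path.start_with?(Dir.pwd) && lines != before[path]
    end.keys

    deps_by_test[example.metadata[:file_path]] |= touched
  end

  config.after(:suite) do
    FileUtils.mkdir_p("tmp")
    File.write("tmp/test_file_dependencies.json", JSON.pretty_generate(deps_by_test))
  end
end
```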

Message originally sent by slack user U78DAHFQKN2

It can also be used in local development to run test cases incrementally as you code and change files.

Message originally sent by slack user U78HE411QT1

Huge fan of Knapsack Pro and the support is amazing. We run it on GitHub Actions and it’s dramatically improved the developer experience. That said, we’ve basically given up trying to run the whole test suite locally.

Message originally sent by slack user U78DAHFQKN2

Interesting, that seems to be a fairly common problem. @U78HE411QT1, are you interested in exploring how AppMap can analyze the dependencies so you can run just a specific subset of tests for a change set? Here’s a video of how it works locally - https://www.loom.com/share/741ffb521901417580735b8c6b3e59c9 - the same strategy can be applied when running in CI, but not using Guard, obviously. In CI, the technique is to look at the files that have changed between the PR branch and the main branch; AppMap computes which test files are out of date, then you add in any new tests on that branch and run those.

For development, as you can see from the video it’s working fine today. For CI, we are interested in working with some people to help refine the details of how to do it optimally. Please let me know if you’d like to discuss further!
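A matching sketch of the CI selection step described above, assuming the hypothetical tmp/test_file_dependencies.json map from the earlier snippet rather than AppMap’s own output, and origin/main as the diff base:

```ruby
#!/usr/bin/env ruby
# Illustrative sketch: re-run every recorded test whose dependencies include a
# changed file, plus any spec files that are new on the branch.
require "json"
require "set"

# --diff-filter=d excludes deleted files, so we never try to run a removed spec.
changed = `git diff --name-only --diff-filter=d origin/main...HEAD`.split("\n")
changed_set = changed.to_set

deps_by_test = JSON.parse(File.read("tmp/test_file_dependencies.json"))

# Recorded tests whose dependencies overlap the change set (the recorder stores
# absolute paths, while git prints repo-relative ones).
stale = deps_by_test.select do |_test, files|
  files.any? { |file| changed_set.include?(file.delete_prefix("#{Dir.pwd}/")) }
end.keys

# Spec files added on this branch have no recorded history yet, so always run them.
new_specs = changed.select { |file| file.end_with?("_spec.rb") && !deps_by_test.key?("./#{file}") }

to_run = (stale + new_specs).uniq
if to_run.empty?
  puts "No affected specs for this change set."
else
  system("bundle", "exec", "rspec", *to_run)
end
```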

Message originally sent by slack user U78HE411QT1

Thanks Kevin. To be honest, we don’t have much bandwidth to explore new tooling on this, but this looks pretty cool.