Regression testing is about verifying that the system still behaves as it did before a code change. Today, we use automated regression tests running in a continuous integration (CI) system to deliver fast feedback to the team. As the team writes new tests for newly added features at different levels (unit, integration, etc.), many, if not most, of those tests will be added to the regression suites.
With so many tests being added, your team needs a plan for keeping the suites up to date: decide which tests will be added as you change or add functionality, and when old functionality is removed and no longer relevant, remember to delete or stop running the corresponding tests.
You must make sure your regression suites are managed and organized by system functionality, so that tests are easy to locate when you need to update, delete, or modify them, or to demonstrate on demand how the system behaves. My teams use MTM to maintain both stories and tests; one practice is to create tests in MTM and attach the test suites to a given Feature/Story, or to create the tests directly in the relevant Feature/Story. This works while testing is done within an iteration, but as you probably know, it is not maintainable over the long run as code is added and changed.
This is a major challenge. Many organizations and teams struggle when they need to locate and execute test suites written months earlier, after the feature is complete. Automated or manual tests should therefore always be migrated from a story-based grouping into a more comprehensive one based on product functionality or business area.
To make sure you are using the practices best suited to your business, follow the process below and adapt it as your needs evolve:
Decide on the criteria the team will use to choose which tests to add to the regression suite and which to leave out, that is, which tests provide enough valuable coverage to be worth keeping.
Decide how to mark those tests so they can be selected with simple queries (for example, using tags and predefined queries).
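As a concrete illustration of tagging tests for query-based selection, here is a minimal sketch using pytest markers. The function `total_price` and the marker names `regression` and `checkout` are hypothetical, chosen only to show the mechanism:

```python
import pytest

def total_price(items):
    """Hypothetical production code under test: sums (name, price) pairs."""
    return sum(price for _, price in items)

# Markers act as tags; CI can select tagged subsets without editing the suite.
@pytest.mark.regression
@pytest.mark.checkout
def test_total_price_sums_line_items():
    assert total_price([("book", 10), ("pen", 2)]) == 12
```

With tests tagged this way, a "predefined query" is just a saved marker expression, e.g. `pytest -m "regression and checkout"` to run only the checkout regression tests (register the marker names in `pytest.ini` to avoid warnings).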
Decide when to add those tests: at the end of each feature, at the end of each iteration, or when a story is marked as “Done.”
Unit-level tests created as part of TDD should be added automatically, since they are checked into the CI system together with the code they are supposed to test.
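This automatic pickup works when the team relies on the test runner's discovery conventions rather than manual suite registration. A minimal sketch, assuming pytest's default discovery rules and a hypothetical `apply_discount` function checked in alongside its test:

```python
# tests/test_discount.py
# Follows pytest's default discovery conventions (files named test_*.py,
# functions named test_*), so a CI step that runs `pytest tests/` picks
# this test up automatically the moment it is checked in.

def apply_discount(price, pct):
    """Hypothetical production function committed together with its test."""
    return round(price * (1 - pct / 100), 2)

def test_ten_percent_discount():
    assert apply_discount(100.0, 10) == 90.0
```

Because no suite file has to be edited, the TDD tests join the regression run on the same commit as the code they cover.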
Define how the team will create and maintain the regression suites. One approach I like is to form a community of testers and developers who, at regular intervals, review which tests should be kept and which should be removed based on their return on investment.