The more tests you have on your codebase, the better an LLM-based autopilot can detect its own errors as it proposes changes.
A passing test doesn't tell you the code definitely works correctly.
Maybe it fails in some way you haven't encountered before or captured in a test.
A failing test (or a failing compile) almost certainly means something is broken.
A generally useful pattern, especially if you'll have more automated assistance: smoke tests scattered in every direction, to make it more likely that a non-viable change gets detected.
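For example, a minimal sketch of that pattern with pytest (the `myproject` modules and the `Store` API are hypothetical, stand-ins for whatever your codebase actually exposes):

```python
# A minimal sketch of "smoke tests in every direction", using pytest.
# The package layout (myproject.*) and the Store API are hypothetical,
# purely for illustration. Each test only checks that a basic path runs
# without raising; the goal is cheap, broad detection of non-viable
# changes, not proof of correctness.
import importlib

import pytest

MODULES = ["myproject.db", "myproject.api", "myproject.cli"]  # hypothetical names


@pytest.mark.parametrize("name", MODULES)
def test_module_imports(name):
    # An import failure alone catches a large class of broken changes.
    importlib.import_module(name)


def test_store_roundtrip(tmp_path):
    # One end-to-end happy path per subsystem (hypothetical Store API).
    from myproject.db import Store

    store = Store(tmp_path / "data.sqlite")
    store.put("key", "value")
    assert store.get("key") == "value"
```

None of these tests prove the code is right; they just give an autopilot many cheap chances to notice it has broken something.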
Tests get more important... and LLMs also make them considerably easier to write.