

Tests should be written from requirements. Using LLMs to write tests after the code is written (probably also by an LLM) is a huge anti-pattern: the model looks at what the code is doing and writes tests that pass (or fail because they bungle the setup). What the model does not do is understand what the code *needs* to do and write tests that ensure that functionality is present and correct.
Tests are the thing that should get the most human investment, because they anchor the project to its real-world requirements. You will have tons more confidence in your vibe-coded appslop if you at least thought through the test cases and built them out first. Then, whatever the shortcomings of the AI codebase, if the tests pass you know it is doing something right.
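To make the order concrete, here is a hypothetical sketch. The requirement, function name (`parse_duration`), and format are all made up for illustration: the tests are written straight from the stated requirement, and only then does an implementation (human- or AI-written) get judged against them.

```python
# Hypothetical requirement, written down BEFORE any implementation exists:
#   parse_duration("1h30m") returns total seconds (here, 5400).
#   Components h/m/s are each optional; empty or malformed input raises ValueError.
import re

# The tests encode the requirement, not whatever the code happens to do.
def test_hours_and_minutes():
    assert parse_duration("1h30m") == 5400

def test_seconds_only():
    assert parse_duration("45s") == 45

def test_rejects_empty():
    try:
        parse_duration("")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")

# Only now does an implementation appear, and it has to satisfy the tests above.
def parse_duration(text: str) -> int:
    if not text:
        raise ValueError("empty duration")
    match = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", text)
    if not match or not any(match.groups()):
        raise ValueError(f"malformed duration: {text!r}")
    h, m, s = (int(g) if g else 0 for g in match.groups())
    return h * 3600 + m * 60 + s

if __name__ == "__main__":
    test_hours_and_minutes()
    test_seconds_only()
    test_rejects_empty()
    print("all requirement tests pass")
```

Note that the tests could be written without ever seeing the implementation; a test generated by reading the code would happily bless a bug like treating `"1h30m"` as 90 seconds.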

On Mars, initial habitation would be underground or in canyons to shield settlers from radiation; plans for the Moon are similar.
Of course, excavating sub-surface dwellings on another celestial body is currently about as technically feasible as time travel, but that's the heart of the most realistic plans for long-term habitation.