Abstract Spurred by the ability of causal structure learning (CSL) to reveal cause–effect connections, significant research effort has been devoted to improving the scalability of CSL algorithms in various artificial intelligence applications. However, far less attention has been paid to the stability and interpretability of CSL algorithms. This work therefore proposes a self-correction mechanism that embeds domain knowledge into CSL, improving stability and accuracy even in low-dimensional but high-noise environments by guaranteeing a meaningful output. The proposed algorithm is evaluated against multiple classic and influential CSL algorithms on synthetic and field datasets. Our algorithm achieves superior accuracy on the synthetic datasets, while on the field dataset it interprets the learned causal structure as a human preference for investment, coinciding with domain expert analysis.