Neurosymbolic repair for low-code formula languages
October 1, 2022
R. Bavishi*, H. Joshi*, J. Cambronero, A. Fariha, S. Gulwani, V. Le, I. Radiček, G. Verbruggen | OOPSLA 2022

Low-code platforms enable end users to create applications using visual interfaces and formula languages, but users often struggle to write correct formulas. We present a neurosymbolic approach to automatically repairing formulas in low-code environments, combining the semantic understanding of neural networks with the precise logical reasoning of symbolic systems. The approach first uses neural components to infer the intent behind a potentially incorrect formula, then employs symbolic reasoning to generate syntactically and semantically correct repairs. Our framework handles a range of formula errors, including syntax errors, logical inconsistencies, and semantic mismatches, and is designed to work across different low-code platforms and formula languages, making it broadly applicable. In an extensive evaluation on real-world formula datasets from multiple low-code platforms, our neurosymbolic approach significantly outperforms purely neural and purely symbolic baselines, achieving higher repair accuracy while remaining explainable.
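The neural-propose / symbolic-verify pattern described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate generator stands in for a learned model, and the symbolic checker is reduced to balanced parentheses and a hypothetical whitelist of function names.

```python
import re

# Hypothetical set of known formula functions (stand-in for a real grammar).
KNOWN_FUNCS = {"SUM", "IF", "AVERAGE", "CONCATENATE"}


def is_valid(formula: str) -> bool:
    """Symbolic check: parentheses balance and all called functions exist."""
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # closing paren with no matching open
                return False
    if depth != 0:
        return False
    # Every NAME( must be a known function.
    return all(name in KNOWN_FUNCS for name in re.findall(r"([A-Z]+)\(", formula))


def propose_candidates(broken: str):
    """Stand-in for the neural component: yield plausible repairs in
    (supposed) ranked order. A real model would score learned edits."""
    yield broken                # maybe it was fine after all
    yield broken + ")"          # missing closing paren
    yield broken.rstrip(")")    # extra closing paren
    yield broken.upper()        # mis-cased function name


def repair(broken: str):
    """Return the first neural candidate the symbolic checker accepts."""
    for cand in propose_candidates(broken):
        if is_valid(cand):
            return cand
    return None  # no candidate passes symbolic validation
```

For example, `repair("=SUM(A1:A3")` closes the unbalanced parenthesis, while a call to an unknown function is rejected outright; the symbolic layer guarantees every emitted repair is well-formed, which a purely neural generator cannot.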