This is neat. I've used FizzBee and TLA+ for model checking. Being able to test the implementation would be nice.
How is this different from test case generation in TLA+?
Glad you have tried FizzBee before. Do you have any feedback on it?
With TLA+, I mostly see papers and example projects that implement model-based trace checking solutions.
While it works, it usually clutters the main code (the SUT) with tracing library calls. And in some papers, you also need to create a separate, modified version of the spec that incorporates the tracing.
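To illustrate the clutter, here's a hypothetical sketch (not from any real library; all names are illustrative): the SUT's methods get interleaved with trace-emission calls, and a separate checker replays the recorded trace against the transitions the spec allows:

```python
# Hypothetical sketch of model-based trace checking.
# The SUT (a counter) is instrumented with trace-emission calls,
# and check_trace validates the recorded trace against a simple spec.
trace = []

def emit(event, state):
    # Tracing call mixed into the SUT: records each action and the state after it.
    trace.append((event, state))

class Counter:
    def __init__(self):
        self.value = 0
        emit("init", self.value)

    def increment(self):
        self.value += 1
        emit("increment", self.value)  # instrumentation interleaved with logic

def check_trace(trace):
    # Spec: value starts at 0 and each increment raises it by exactly 1.
    expected = 0
    for event, state in trace:
        if event == "increment":
            expected += 1
        if state != expected:
            return False
    return True

c = Counter()
c.increment()
c.increment()
print(check_trace(trace))  # the recorded trace satisfies the spec
```

Even in this toy example, every state change in the SUT needs a matching `emit` call, which is the clutter referred to above.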
MongoDB published a paper a while ago comparing model based testing and model based trace checking.
I'll share more details soon.
Using Python syntax makes it more accessible.
Thanks. Please give it a try, and let me know if you have any issues. I'd be happy to help.