In a conversation with Mark Jason Dominus and Pozorvlak on Twitter about GHC compile error messages I realised there was a common pattern to these problems:
Compile errors are confusing, but they are confusing in predictable ways: an error message that is completely unintuitive to a human still tends to be emitted for the same underlying mistake each time.
Writing good error messages is hard, but this is the sort of thing a computer can debug better than a human. A simple Bayesian text classifier could probably map these error messages to diagnostic suggestions extremely well, and sometimes a suggestion is all you need to put you on the right path.
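To make the idea concrete, here is a minimal sketch of such a classifier: a hand-rolled multinomial naive Bayes model from error text to a diagnosis string. The training examples and diagnosis labels are invented for illustration; a real deployment would learn from crowdsourced reports.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase word tokens; good enough for a sketch."""
    return re.findall(r"[a-z]+", text.lower())

class ErrorClassifier:
    """Multinomial naive Bayes from error text to a suggested diagnosis."""

    def __init__(self):
        self.label_counts = Counter()             # diagnosis -> example count
        self.token_counts = defaultdict(Counter)  # diagnosis -> token counts
        self.vocab = set()

    def train(self, message, diagnosis):
        self.label_counts[diagnosis] += 1
        for tok in tokenize(message):
            self.token_counts[diagnosis][tok] += 1
            self.vocab.add(tok)

    def classify(self, message):
        total = sum(self.label_counts.values())
        best, best_score = None, -math.inf
        for label, count in self.label_counts.items():
            score = math.log(count / total)  # log prior
            denom = sum(self.token_counts[label].values()) + len(self.vocab)
            for tok in tokenize(message):
                # add-one smoothing so unseen tokens don't zero the score
                score += math.log((self.token_counts[label][tok] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Invented training data for illustration only.
clf = ErrorClassifier()
clf.train("No instance for (Show (a -> b)) arising from a use of 'print'",
          "you may be printing a function; check argument counts")
clf.train("Couldn't match expected type 'Int' with actual type '[Char]'",
          "a string is being used where a number is expected")
```

With even these two examples, a fresh error like `No instance for (Show (Int -> Int))` lands on the "printing a function" diagnosis, because the model only needs the message's recognisable shape, not its exact text.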
Moreover we can crowdsource gathering the data.
Here is a plan: a simple wrapper script which you can alias to any program you like. It executes that program, passing all the arguments through.
If the program exits with a non-zero code, the wrapper captures stderr and runs it through a classifier. It then says “Hey, I think this could be one of these problems”.
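The wrapper itself is only a few lines. A sketch in Python, where `classify` is a stand-in for the trained model (any callable taking stderr text and returning a list of suggestions):

```python
import subprocess
import sys

def run_wrapped(argv, classify):
    """Run the wrapped command; on failure, suggest likely causes.

    argv     -- the wrapped command line, e.g. ["ghc", "Main.hs"]
    classify -- callable mapping stderr text to a list of diagnoses
    """
    result = subprocess.run(argv, capture_output=True, text=True)
    # Pass the program's own output through untouched.
    sys.stdout.write(result.stdout)
    sys.stderr.write(result.stderr)
    if result.returncode != 0:
        suggestions = classify(result.stderr)
        if suggestions:
            print("Hey, I think this could be one of these problems:")
            for s in suggestions:
                print(" -", s)
    return result.returncode

# Usage (hypothetical): save as errwrap, then `alias ghc='errwrap ghc'`
# and exit with sys.exit(run_wrapped(sys.argv[1:], classify)).
```

The key design point is that the wrapper is completely transparent on success: it forwards stdout, stderr, and the exit code, so aliasing it over a compiler changes nothing about existing workflows.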
If, soon after, it sees you run the program again with exactly the same arguments and succeed, it says “Great, you fixed it! Was it one of the errors we suggested? If not, could you provide a short diagnostic?” and submits your answer back to a central service. The service regularly builds machine learning models (one per base command) and ships them back to you on demand or semi-regularly.
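The "did you just fix it?" check only needs a small piece of local state keyed by the exact command line. A sketch, with the file path, time window, and function names all illustrative:

```python
import json
import time
from pathlib import Path

RECENT = 30 * 60  # treat "soon after" as within half an hour (arbitrary)

def record_failure(state_path, argv, stderr_text, suggestions):
    """Remember the last failed command line and what we suggested."""
    state_path.write_text(json.dumps({
        "argv": argv,
        "stderr": stderr_text,
        "suggestions": suggestions,
        "time": time.time(),
    }))

def check_fixed(state_path, argv):
    """On a successful run, return the matching recent failure, if any.

    The caller would then prompt the user and submit the answer to the
    central service; that part is omitted here.
    """
    if not state_path.exists():
        return None
    last = json.loads(state_path.read_text())
    if last["argv"] == argv and time.time() - last["time"] < RECENT:
        state_path.unlink()  # consume the record so we only ask once
        return last
    return None
```

Matching on the exact argument list is deliberately strict: it keeps the training signal clean, since a success with different arguments says nothing about whether the earlier suggestion was right.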
You need some additional functionality to prevent well-poisoning and similar abuse, but I don’t think it would be too hard to get a basic version up and running.
I’m not going to do it though – I’ve more than enough to do, and this doesn’t actually help with any problems I currently have regularly. If anyone wants to take the idea and run with it, please do so with my blessing.