Hello team,
First of all, thank you for the great work on this project — it's truly impressive.
I do, however, have a question regarding the overall approach. I noticed that instead of directly addressing the task as a Mispronunciation Detection and Diagnosis (MDD) problem, the current method relies on comparing phone sequences obtained via Speech2IPA.
Would it be more effective to frame this task directly as an MDD problem? One idea would be to keep the current model as a backbone for extracting speech features, and pair those features with a reference transcript converted to IPA using the model's existing vocabulary. This might offer a more targeted and interpretable approach to pronunciation feedback.
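To make the idea concrete, here is a minimal, purely illustrative sketch of the diagnosis step: aligning a recognized phone sequence against a canonical IPA reference with edit-distance alignment, then labeling each phone as correct, substituted, inserted, or deleted. The phone symbols and function names are my own assumptions, not the project's actual output or API.

```python
def align(reference, hypothesis):
    """Levenshtein alignment with backtrace; returns (ref, hyp) pairs,
    using None on one side for insertions/deletions."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    # Backtrace to recover the aligned phone pairs
    pairs, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (
                0 if reference[i - 1] == hypothesis[j - 1] else 1):
            pairs.append((reference[i - 1], hypothesis[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            pairs.append((reference[i - 1], None))  # phone omitted by speaker
            i -= 1
        else:
            pairs.append((None, hypothesis[j - 1]))  # extra phone produced
            j -= 1
    return pairs[::-1]

def diagnose(reference, hypothesis):
    """Turn the alignment into per-phone MDD-style feedback."""
    feedback = []
    for ref, hyp in align(reference, hypothesis):
        if ref == hyp:
            feedback.append((ref, "correct"))
        elif ref is None:
            feedback.append((hyp, "inserted"))
        elif hyp is None:
            feedback.append((ref, "deleted"))
        else:
            feedback.append((ref, f"substituted with {hyp}"))
    return feedback

canonical = ["θ", "ɪ", "ŋ", "k"]   # "think"
recognized = ["s", "ɪ", "ŋ", "k"]  # a common /θ/ → /s/ substitution
for phone, verdict in diagnose(canonical, recognized):
    print(phone, verdict)
```

A real MDD model would of course score this at the acoustic-feature level rather than on hard-decoded phone strings, but the alignment output above is roughly the kind of per-phone feedback I have in mind.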
Here is an example of an MDD model.
