Authors: Vassilios Digalakis, Dimitris Oikonomidis
DOI:
Keywords:
Abstract: In this work we build language models using three different training methods: n-gram, class-based, and maximum entropy models. The main issue is the use of stem information to cope with the very large number of distinct words in an inflectional language like Greek. We compare the models in terms of both perplexity and word error rate. We also examine thoroughly the differences on specific subsets of words.