Abstract: Extending linear classifiers from feature vectors to attributed graphs results in sublinear classifiers. In contrast to linear models, the classification performance of these models depends on our choice of which class we label positive and which negative. We prove that the expected accuracy may differ between the two labelings. Experiments confirm this finding for empirical accuracies on small samples. These results give rise to flip-flop classifiers, which consider both labelings during training and select the model whose predictions better fit the data.
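
To illustrate the flip-flop idea summarized above, here is a minimal sketch: the classifier is trained under both labelings (original and flipped) and the model whose predictions better fit the training data is kept. The class name `FlipFlopClassifier`, the scikit-learn-style estimator interface, and the use of training accuracy as the fit criterion are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score


class FlipFlopClassifier:
    """Sketch: train under both labelings, keep the one that fits the data better."""

    def __init__(self, base_estimator):
        # Any binary classifier whose behaviour depends on which class is positive.
        self.base_estimator = base_estimator

    def fit(self, X, y):
        y = np.asarray(y)  # assumes labels are encoded as 0/1
        best_acc, best_model, best_flip = -1.0, None, False
        for flip in (False, True):
            y_train = 1 - y if flip else y  # swap positive and negative class
            model = clone(self.base_estimator).fit(X, y_train)
            pred = model.predict(X)
            if flip:
                pred = 1 - pred  # map predictions back to the original labeling
            acc = accuracy_score(y, pred)
            if acc > best_acc:  # keep the labeling whose predictions fit better
                best_acc, best_model, best_flip = acc, model, flip
        self.model_, self.flip_ = best_model, best_flip
        return self

    def predict(self, X):
        pred = self.model_.predict(X)
        return 1 - pred if self.flip_ else pred
```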