Authors: Sebastian Klie, Lennart Martens, Juan Antonio Vizcaíno, Richard Côté, Phil Jones
DOI: 10.1021/PR070461K
Keywords:
Abstract: Since the advent of public data repositories for proteomics data, readily accessible results from high-throughput experiments have been accumulating steadily. Several large-scale projects in particular have contributed substantially to the amount of identifications available to the community. Despite the considerable body of information amassed, very few successful analyses have been performed and published on these data, leveling off the ultimate value of these projects far below their potential. A prominent reason why these data are seldom reanalyzed lies in the heterogeneous nature of the original sample collection and the subsequent data recording and processing. To illustrate that at least part of this heterogeneity can be compensated for, we here apply a latent semantic analysis to the data contributed by the Human Proteome Organization's Plasma Proteome Project (HUPO PPP). Interestingly, despite the broad spectrum of instruments and methodologies applied in the HUPO PPP, our analysis reveals several obvious patterns that can be used to formulate concrete recommendations for optimizing project planning as well as the choice of technologies in future experiments. It is clear that the analysis of large bodies of publicly available data with noise-tolerant algorithms such as this one holds great promise and is currently underexploited.
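The latent semantic analysis mentioned in the abstract can be illustrated with a small sketch. The following is a hypothetical toy example, not the authors' actual pipeline: a binary protein-by-experiment identification matrix (all values invented) is decomposed with a truncated singular value decomposition, and experiments are compared in the resulting low-dimensional latent space, where noise and sparsity in the raw identification lists are smoothed out.

```python
# Hypothetical sketch of latent semantic analysis (LSA) on a
# protein-by-experiment identification matrix. The matrix, its
# dimensions, and the choice of k are invented for illustration;
# this is not the paper's actual workflow.
import numpy as np

# Toy data: rows = proteins, columns = experiments (e.g. contributing labs);
# entry (i, j) = 1 if protein i was identified in experiment j.
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
], dtype=float)

# LSA core step: truncated SVD keeping k latent factors.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
# Coordinates of each experiment in the k-dimensional latent space.
exp_coords = (np.diag(s[:k]) @ Vt[:k]).T   # shape: (n_experiments, k)

def cos_sim(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare two experiments in the latent space rather than on their
# raw, noisy identification lists.
sim_01 = cos_sim(exp_coords[0], exp_coords[1])
print(f"latent-space similarity of experiments 0 and 1: {sim_01:.3f}")
```

In practice, such latent-space similarities can reveal groupings of experiments by instrument or methodology even when the raw identification lists overlap only partially.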