Aim Cfg Cs 100 Hs: The Best Way to Practice and Train Your Aim in CS:GO
An Online Tool for Teaching the Forward-Backward Algorithm and String-Matching in Parallel. Nick Hammen, Alexandria Lee, and Nicholas Hammen (2009). In Proceedings of the International Symposium on Parallel and Distributed Processing (ISPD).
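For readers unfamiliar with the algorithm this tool teaches, the following is a minimal serial sketch of the standard forward-backward algorithm for a discrete HMM, written in Python with numpy. It is illustrative only, not the parallel implementation or the online tool from the paper, and the toy parameters at the end are made up.

import numpy as np

def forward_backward(obs, pi, A, B):
    """Posterior state marginals for a discrete HMM via forward-backward.

    obs : sequence of observation indices, length T
    pi  : (S,) initial state distribution
    A   : (S, S) transition matrix, A[i, j] = P(next state j | state i)
    B   : (S, V) emission matrix,   B[i, k] = P(observation k | state i)
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))   # forward probabilities
    beta = np.zeros((T, S))    # backward probabilities

    # Forward pass.
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    # Backward pass.
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    # Posterior marginals P(state at time t | all observations).
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Toy example: 2 hidden states, 2 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_backward([0, 1, 0], pi, A, B))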
Multimodal Unification for Neural Word Embeddings. Weiwei Chen, Jason Eisner, Alexa Kótár, Alec Shimeld, and Victoria Strelnikov (2018). In EMNLP.
Finite embeddings are a popular way to learn word representations, but they are not domain-specific and thus have limited value for downstream tasks. In this paper, we present a method for learning domain-specific finite-embedding models using multimodal word alignment. To do this, we extend existing unsupervised neural word aligners to learn a latent representation for input symbols from multiple modalities, and we use a graph neural network (GNN) to perform probabilistic multimodal alignment. Our model learns an effective finite-embedding model from heterogeneous multilingual and unimodal data without task-specific restrictions. We show empirically that it improves upon existing unimodal methods and compares favorably with state-of-the-art word embedding models, particularly on multimodal NLP applications.
Keywords: multimodal, neural alignment, graph neural networks
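The abstract centers on probabilistic alignment of representations across modalities. As an illustration only, and not the paper's GNN-based model, here is a minimal numpy sketch of soft alignment between two embedding sets via a softmax over cosine similarities; the function name, temperature parameter, and toy data are all assumptions.

import numpy as np

def soft_alignment(text_emb, other_emb, temperature=0.1):
    """Probabilistic alignment between two embedding sets.

    text_emb  : (N, d) embeddings from one modality
    other_emb : (M, d) embeddings from another modality
    Returns an (N, M) matrix whose rows are alignment distributions.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    sims = normalize(text_emb) @ normalize(other_emb).T   # cosine similarities
    logits = sims / temperature
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

# Toy example: align 3 word embeddings to 4 embeddings from another modality.
rng = np.random.default_rng(0)
print(soft_alignment(rng.normal(size=(3, 8)), rng.normal(size=(4, 8))))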
Unsupervised Acoustic Modeling of Vowels in Spontaneous Speech. Andrew D. Salesky, Eleanor Chodroff, Tiago Pimental, Matthew Wiesner, Ravi Iyer, Ryan Cotterell, Alan W. Black, Jason Eisner, and Elizabeth R. Salesky (2017). In the ICML Workshop on Inferning: Interactions between Inference and Learning.
Utterance-level phonetic information can aid the automatic recognition and translation of speech. Humans can perceive an acoustic signal as containing distinct phonemes that are not part of the word or sentence that produced it, yet phonetic cues often provide useful lexico-semantic information. In this work, we investigate unsupervised automatic modeling of phonetic classes from spontaneous speech. Unlike much related work, we focus on a unit-based phonetic class model rather than on phonetic features.
Note: see the NeurIPS 2016 paper, which is the final version of this workshop paper.
Keywords: dynamic prioritization, parsing approximations, reinforcement learning
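As a rough illustration of what unsupervised modeling of phonetic classes can look like, and not the method described in the paper, the sketch below clusters frame-level acoustic features into discrete classes with plain k-means; the feature dimensionality, number of classes, and all names are assumptions, and the random "MFCC-like" frames stand in for real acoustic features.

import numpy as np

def kmeans_phone_classes(frames, k=5, iters=50, seed=0):
    """Cluster frame-level acoustic features (T, d) into k unsupervised
    'phonetic classes' with plain k-means. Returns (T,) class labels."""
    rng = np.random.default_rng(seed)
    centers = frames[rng.choice(len(frames), size=k, replace=False)]
    labels = np.zeros(len(frames), dtype=int)
    for _ in range(iters):
        # Assign each frame to its nearest center.
        dists = np.linalg.norm(frames[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = frames[labels == j].mean(axis=0)
    return labels

# Toy example: 200 random 13-dimensional "MFCC-like" frames.
rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 13))
print(np.bincount(kmeans_phone_classes(frames, k=5), minlength=5))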