Aaron Traylor

Hi! I'm Aaron.

Contact

Bio

I was a PhD student at Brown University. I worked with Professor Ellie Pavlick in the LUNAR (Language UNderstanding And Representation) Lab, and I was also advised by Professor Roman Feiman of the Brown Language and Thought Lab.

I studied the reasoning capabilities of language models (mostly pretty small ones), and I learned a lot more about developmental psychology than I thought I would going in!

I am now a data scientist.

In May 2022 and May 2023, I was a summer intern for Microsoft Turing.

In the summer of 2019, I worked as an intern at Megagon Labs.

Previously, I worked in Professor Andrew McCallum's Information Extraction and Synthesis Laboratory and collaborated with Nicholas Monath and Rajarshi Das.

Research Interests (Now)

If you've found this page, you are likely interested in competitive Pokémon. The game is unusual among strategy games for many reasons: it combines long-term planning, simultaneous action selection under imperfect information, and probability management. But it stands out mainly for its emphasis on the "teambuilding" phase, called "deckbuilding" or the "draft phase" in other games: both players first select their action space within the game, and only afterwards take actions in the environment. A draft phase also appears in other games that AI systems have successfully played (Stratego, League of Legends, DotA 2, Honor of Kings), but in Pokémon it is scaled up to an astronomically large state space (roughly 10^200). Formalizing the game as a problem that computer science can attack, let alone building a system with superhuman strength, will require novel research. And what does it really mean to be superhuman, anyway? I can't tell you that I have an answer to that.
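
To give a sense of that scale, here is a back-of-envelope sketch in Python. Every constant in it (the species pool, movepool size, item and ability counts, the granularity of stat spreads) is a rough assumption for illustration, not exact game data; the point is only that per-Pokémon choices multiply across a team of six.

# Rough estimate of the size of the Pokemon teambuilding space.
# Every constant below is an illustrative assumption, not exact game data.
from math import comb, log10

NUM_SPECIES = 1000    # assumed size of the available species pool
MOVEPOOL = 100        # assumed average legal movepool per species
MOVES_CHOSEN = 4      # each Pokemon knows up to four moves
NUM_ITEMS = 250       # assumed number of held items
NUM_ABILITIES = 3     # up to three possible abilities per species
NUM_NATURES = 25
STAT_SPREADS = 10**5  # assumed count of meaningfully distinct EV/IV spreads

# Configurations for one team slot: moveset x item x ability x nature x spread.
per_mon = (comb(MOVEPOOL, MOVES_CHOSEN) * NUM_ITEMS * NUM_ABILITIES
           * NUM_NATURES * STAT_SPREADS)

# A team is six distinct species, each configured independently.
teams = comb(NUM_SPECIES, 6) * per_mon**6

print(f"one player's team space: ~10^{log10(teams):.0f}")
print(f"joint two-player draft:  ~10^{2 * log10(teams):.0f}")

Under these toy numbers, a single player's team space already comes out around 10^110, so the joint draft across both players lands in the same ballpark as the 10^200 figure above.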

My research interests lie mostly at the intersection of competitive Pokémon with computer science and cognitive science, but they also touch economics (e.g., behavioral game theory), mental health, neuroscience, and really anything you can think of. It's unbelievable how much you don't know about the game you've been playing all your life.

If you are curious or have more questions, please feel free to reach out to the email at the top of this page. Unfortunately, I can't commit to responses or collaborations at this time; I barely have enough time to play competitive Pokémon, let alone work on research ^_^' But I'm rooting for you, and I hope you find out more about the beauty of this game.

Research Interests (Back then)

My research focused on how humans, language models, and other machine learning systems use logic. Logical reasoning is fundamental to human cognition, and humans apply it flexibly in a wide variety of contexts. How would a model have to behave for us to say that it has a similar capacity for logical reasoning or inference, especially if it does not explicitly represent logic symbolically? And what is the fundamental source of logic in the human brain: are logical capabilities learned or innate? I approached these questions from computational, philosophical, and human developmental perspectives.

I was interested in natural language understanding (NLU), computational cognitive science, neuro-symbolic frameworks, textual inference, reinforcement learning, and representation learning.

Education

Brown University. 2018-2024. PhD. Computer Science.
University of Massachusetts Amherst. 2014-2018. BS. Computer Science.
My mom and dad. Age 0 to now. They've sure taught me a lot.

Publications

Aaron Traylor, Roman Feiman, and Ellie Pavlick. Can Neural Networks Learn Implicit Logic from Physical Reasoning? EMNLP 2023. PDF.

Brandon Prickett, Aaron Traylor, and Joe Pater. Learning Reduplication with a Neural Network that Lacks Explicit Variables. Journal of Language Modelling 2022. PDF.

Aaron Traylor, Roman Feiman, and Ellie Pavlick. AND does not mean OR: Using Formal Languages to Study Language Models’ Representations. ACL 2021. PDF.

Aaron Traylor, Roman Feiman, and Ellie Pavlick. Transferring Representations of Logical Connectives. NALOMA workshop at ACL 2020. PDF.

Nikita Bhutani, Aaron Traylor, Chen Chen, Xiaolan Wang, Behzad Golshan, and Wang-Chiew Tan. SAMPO: Unsupervised Knowledge Base Construction for Opinions and Implications. AKBC 2020. PDF.

Derek Tam, Nicholas Monath, Ari Kobren, Aaron Traylor, Rajarshi Das, and Andrew McCallum. Optimal Transport-based Alignment of Learned Character Representations for String Similarity. ACL 2019. PDF.

Aaron Traylor, Chen Chen, Behzad Golshan, Xiaolan Wang, Yuliang Li, Yoshihiko Suhara, Jinfeng Li, Cagatay Demiralp, and Wang-Chiew Tan. Enhancing Review Comprehension with Domain-Specific Commonsense. arXiv preprint. PDF.

Brandon Prickett, Aaron Traylor, and Joe Pater. Seq2Seq Models With Dropout Can Generalize Reduplication. SIGMORPHON at EMNLP 2018. PDF.

Aaron Traylor*, Nicholas Monath*, Rajarshi Das, and Andrew McCallum. Learning String Alignments for Entity Aliases. AKBC Workshop at NIPS 2017. (* equal contribution) PDF.

Haw-Shiuan Chang, Abdurrahman Munir, Ao Liu, Johnny Tian-Zheng Wei, Aaron Traylor, Ajay Nagesh, Nicholas Monath, Patrick Verga, Emma Strubell, and Andrew McCallum. Extracting Multilingual Relations under Limited Resources: TAC 2016 Cold-Start KB Construction and Slot-Filling using Compositional Universal Schema. NIST TAC KBP Workshop 2016. Notebook version: PDF.