LANGUAGE MODELS ARE REALISTIC TABULAR DATA GENERATORS
Vadim Borisov
University of Tübingen, Tübingen, Germany
SELF-CONSISTENCY IMPROVES CHAIN OF THOUGHT REASONING IN LANGUAGE MODELS
With CoT, sampling several reasoning paths and picking the most frequent final answer (majority vote) improves performance further (sketch below).
Xuezhi Wang
Google Research
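
A minimal sketch of the majority-vote step, assuming a hypothetical sample_cot_answer(prompt) that returns the final answer of one sampled chain of thought:

from collections import Counter

def self_consistent_answer(prompt, sample_cot_answer, n_samples=10):
    """Sample several chain-of-thought completions (with temperature > 0)
    and return the most frequent final answer (majority vote over paths).
    `sample_cot_answer` is a hypothetical stand-in for one LM sample."""
    answers = [sample_cot_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
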
LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks
Tuan Dinh
University of Wisconsin-Madison, USA
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Few-shot prompting that demonstrates how to solve the problem (spelling out the intermediate reasoning steps) improves performance; it is especially effective for large LLMs (example below).
Jason Wei
Google Research
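
An illustration of the prompting format: each few-shot exemplar shows the reasoning before the final answer, then the test question is appended. The exemplar below is written in the spirit of the paper's examples, not copied from it:

# Few-shot chain-of-thought prompt: the exemplar contains the reasoning
# steps ("Roger started with 5 ...") before the final answer, and the
# test question is left open for the model to complete.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    "Q: The cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A:"
)
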
Training language models to follow instructions with human feedback
A method for fine-tuning GPT on human preference data so it follows instructions.
Prefix-Tuning: Optimizing Continuous Prompts for Generation
A PEFT method that prepends continuous vectors (a prefix) to x, fine-tunes only them on the downstream task, and then always prepends the learned prefix to x at inference time (sketch below).
Xiang Lisa Li
Stanford University
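
A minimal PyTorch sketch of the idea. The actual method injects prefixes into every layer's key/value activations (via a reparameterization MLP); this simplified version only prepends learned vectors to the input embeddings of a frozen base_model that is assumed to accept embeddings directly:

import torch
import torch.nn as nn

class PrefixTuning(nn.Module):
    """Learn `prefix_len` continuous embeddings that are prepended to the
    input embeddings of a frozen base model; only the prefix is trained."""
    def __init__(self, base_model, prefix_len=10, hidden_dim=768):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False            # base model stays frozen
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim))

    def forward(self, input_embeds):           # (batch, seq, hidden_dim)
        batch = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        # base_model is a hypothetical model that consumes embeddings
        return self.base_model(torch.cat([prefix, input_embeds], dim=1))
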
LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS
Attaching a very small number of weights in parallel to the self-attention weight matrices and fine-tuning only those matches the performance of full fine-tuning (sketch below).
Edward Hu
Microsoft
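
A minimal PyTorch sketch of a LoRA-augmented linear layer: the pretrained weight is frozen and only the low-rank factors A and B are trained (B is zero-initialized so training starts from the original model):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = x W^T + (alpha / r) * x A^T B^T."""
    def __init__(self, base_linear: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze W
        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, r))         # up-projection, init to 0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
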
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
The original RAG paper: couples the DPR retriever with the BART generator and trains them end to end on (x, y) pairs (sketch below).
Patrick Lewis
FAIR
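
A simplified sketch of the inference side, assuming hypothetical retriever and generator callables standing in for DPR and BART. The paper marginalizes over retrieved documents and trains both components jointly; this version just keeps the best-scoring passage:

def rag_generate(query, retriever, generator, top_k=5):
    """Retrieve passages for the query, condition the generator on each,
    and keep the answer with the best combined score.
    `retriever` and `generator` are hypothetical stand-ins."""
    passages = retriever(query, top_k=top_k)        # list of (text, log_score)
    best_text, best_score = None, float("-inf")
    for passage, log_score in passages:
        text, log_prob = generator(query, passage)  # answer + its log-likelihood
        joint = log_score + log_prob                # retrieval prior * generation likelihood (in log space)
        if joint > best_score:
            best_text, best_score = text, joint
    return best_text
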
Language Models are Few-Shot Learners
With well-used few-shot learning, an LLM can handle almost any task well.
OpenAI
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
A Transformer trained bidirectionally, using context from both directions.
Improving Language Understanding by Generative Pre-Training
Emphasizes the importance of pretraining for LLMs.
OpenAI
Attention Is All You Need
The model that changed the language-modeling landscape; demonstrated the strength of self-attention.
Ashish Vaswani
Google Brain
Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection
Anomaly detection that learns reconstruction with masked convolutions.
Nicolae-Cătălin Ristea
University Politehnica of Bucharest, Romania
CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows
Anomaly detection that adds positional encoding to a normalizing flow.
Denis Gudovskiy
Panasonic AI Lab, USA
Fully Convolutional Cross-Scale-Flows for Image-based Defect Detection
Anomaly detection based on a multi-scale normalizing flow.
Marco Rudolph
Leibniz University Hannover, Germany
Anomaly Detection via Reverse Distillation from One-Class Embedding
Anomaly detection based on reverse distillation.
H Deng
Department of Electrical and Computer Engineering, University of Alberta
Towards Total Recall in Industrial Anomaly Detection
Anomaly detection that applies kNN to pretrained patch features (sketch below).
Karsten Roth
Amazon AWS
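
A minimal NumPy sketch of the scoring step: the memory bank holds patch features of normal training images, and a test patch is scored by its distance to its k-th nearest neighbor. The paper additionally coreset-subsamples the bank and re-weights the score, which is omitted here:

import numpy as np

def knn_anomaly_scores(train_patch_feats, test_patch_feats, k=1):
    """train_patch_feats: (n_train, d) features from normal images.
    test_patch_feats: (n_test, d). Returns one score per test patch."""
    # pairwise Euclidean distances, shape (n_test, n_train)
    d = np.linalg.norm(
        test_patch_feats[:, None, :] - train_patch_feats[None, :, :], axis=-1
    )
    kth = np.sort(d, axis=1)[:, k - 1]   # distance to the k-th nearest patch
    return kth                           # larger distance = more anomalous
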
SSD: A UNIFIED FRAMEWORK FOR SELF-SUPERVISED OUTLIER DETECTION
Anomaly detection that applies clustering and Mahalanobis distance to features learned with contrastive learning (sketch below).
Vikash Sehwag
Princeton University
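
A minimal NumPy sketch of the scoring step, assuming train_feats come from normal data only; SSD additionally clusters the features and scores against the nearest cluster:

import numpy as np

def mahalanobis_scores(train_feats, test_feats, eps=1e-6):
    """Fit mean and covariance on (normal) training features and score
    test samples by squared Mahalanobis distance."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + eps * np.eye(train_feats.shape[1])
    cov_inv = np.linalg.inv(cov)
    diff = test_feats - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # higher = more anomalous
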
EXPLAINABLE DEEP ONE-CLASS CLASSIFICATION
Anomaly detection that emphasizes explainability using invertible convolutions.
Philipp Liznerski
ML group, Technical University of Kaiserslautern, Germany
FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows
Anomaly detection that replaces the s and b (scale and bias) parts of the normalizing flow with 2D convolutions (sketch below).
Jiawei Yu
SenseTime Research
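
A minimal PyTorch sketch of an affine coupling layer whose scale (s) and bias (b) are predicted by 2D convolutions, in the spirit of FastFlow; channel permutations, the multi-scale structure, and the training objective are omitted:

import torch
import torch.nn as nn

class Conv2dCoupling(nn.Module):
    """Affine coupling on feature maps: split channels, transform one half
    with a conv-predicted scale and bias conditioned on the other half."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        half = channels // 2                     # assumes an even channel count
        self.net = nn.Sequential(
            nn.Conv2d(half, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * half, 3, padding=1),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)               # split along channels
        s, b = self.net(x1).chunk(2, dim=1)      # conv-predicted scale and bias
        s = torch.tanh(s)                        # keep the scale bounded
        y2 = x2 * torch.exp(s) + b
        log_det = s.flatten(1).sum(dim=1)        # per-sample log-determinant
        return torch.cat([x1, y2], dim=1), log_det
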
Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows
The first normalizing-flow-based anomaly detection method.
Marco Rudolph
Leibniz University Hannover
Divide-and-Assemble: Learning Block-wise Memory for Unsupervised Anomaly Detection
Divides the input into small pieces, performs anomaly detection on each, and then reassembles them.
Jinlei Hou
Hikvision Research Institute
A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection
Self-supervised anomaly detection using transformed patch features.
Shelly Sheynin
Facebook AI Research
LEARNING AND EVALUATING REPRESENTATIONS FOR DEEP ONE-CLASS CLASSIFICATION
Anomaly detection that combines self-supervised learning with one-class classification.
Kihyuk Sohn
Google Cloud AI
Learning Unsupervised Metaformer for Anomaly Detection
Anomaly detection that leverages a model trained for general image reconstruction.
Jhih-Ciang Wu
Institute of Information Science, Academia Sinica, Taiwan