
Paper Reviews

"If I have seen further it is by standing on the shoulders of giants." — Isaac Newton
| Keywords | Year | Venue | Title | One-line summary | Author | Affiliation | Rating |
|---|---|---|---|---|---|---|---|
| LLM, Tabular Task | 2023 | ICLR | Language Models Are Realistic Tabular Data Generators | | Vadim Borisov | University of Tübingen, Tübingen, Germany | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, Prompt Engineering | 2023 | ICLR | Self-Consistency Improves Chain of Thought Reasoning in Language Models | Sampling several CoT reasoning paths and taking the majority answer improves performance. | Xuezhi Wang | Google Research | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, Tabular Task | 2022 | NeurIPS | LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks | | Tuan Dinh | University of Wisconsin–Madison, USA | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, Prompt Engineering | 2022 | NeurIPS | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | Few-shot prompts that demonstrate how to solve the problem raise performance, especially for large LLMs. | Jason Wei | Google Research | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, RLHF | 2022 | NeurIPS | Training Language Models to Follow Instructions with Human Feedback | How to fine-tune GPT on human preference data. | | | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, PEFT, Prompt Tuning | 2021 | IJCNLP | Prefix-Tuning: Optimizing Continuous Prompts for Generation | PEFT that prepends a continuous vector (prefix) to the input x, fine-tunes it on the downstream task, and then always prepends the learned prefix at inference. | Xiang Lisa Li | Stanford University | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, PEFT | 2021 | ICLR | LoRA: Low-Rank Adaptation of Large Language Models | Attaching a small number of weights in parallel to the self-attention weight matrices and fine-tuning only those matches the performance of full fine-tuning. | Edward Hu | Microsoft | ⭐️⭐️⭐️⭐️⭐️ |
| LLM, RAG | 2020 | NeurIPS | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks | The original RAG: couples the retriever DPR with the generator BART and trains them end to end on (x, y). | Patrick Lewis | FAIR | ⭐️⭐️⭐️⭐️⭐️ |
| LLM | 2020 | NeurIPS | Language Models are Few-Shot Learners | Used well, an LLM's few-shot learning can handle almost any task. | | OpenAI | ⭐️⭐️⭐️⭐️ |
| LLM | 2019 | NAACL | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | A Transformer trained in both directions. | | | ⭐️⭐️⭐️⭐️ |
| LLM | 2018 | | Improving Language Understanding by Generative Pre-Training | Highlighted the importance of pretraining for LLMs. | | OpenAI | ⭐️⭐️⭐️⭐️ |
| LLM | 2017 | NeurIPS | Attention Is All You Need | The model that changed the language-modeling landscape; proved the power of self-attention. | Ashish Vaswani | Google Brain | ⭐️⭐️⭐️⭐️⭐️ |
| Anomaly Detection | 2022 | CVPR | Self-Supervised Predictive Convolutional Attentive Block for Anomaly Detection | Anomaly detection that learns reconstruction with masked convolutions. | Nicolae-Cătălin Ristea | University Politehnica of Bucharest, Romania | |
| Anomaly Detection | 2022 | WACV | CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows | Normalizing-flow anomaly detection with positional encoding added. | Denis Gudovskiy | Panasonic AI Lab, USA | |
| Anomaly Detection | 2022 | WACV | Fully Convolutional Cross-Scale-Flows for Image-based Defect Detection | Multi-scale normalizing-flow anomaly detection. | Marco Rudolph | Leibniz University Hannover, Germany | |
| Anomaly Detection | 2022 | CVPR | Anomaly Detection via Reverse Distillation from One-Class Embedding | Reverse-distillation anomaly detection. | H. Deng | Department of Electrical and Computer Engineering, University of Alberta | |
| Anomaly Detection | 2022 | CVPR | Towards Total Recall in Industrial Anomaly Detection | Anomaly detection that applies kNN to pretrained patch features. | Karsten Roth | Amazon AWS | |
| Anomaly Detection | 2021 | ICLR | SSD: A Unified Framework for Self-Supervised Outlier Detection | Anomaly detection that applies clustering and the Mahalanobis distance to features learned with contrastive learning. | Vikash Sehwag | Princeton University | |
| Anomaly Detection | 2021 | ICLR | Explainable Deep One-Class Classification | Anomaly detection that emphasizes explainability using invertible convolutions. | Philipp Liznerski | ML group, Technical University of Kaiserslautern, Germany | |
| Anomaly Detection | 2021 | arXiv | FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows | Anomaly detection that replaces the s and b subnets of the normalizing flow with 2D convolutions. | Jiawei Yu | SenseTime Research | |
| Anomaly Detection | 2021 | WACV | Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows | The first normalizing-flow-based anomaly detection method. | Marco Rudolph | Leibniz University Hannover | |
| Anomaly Detection | 2021 | ICCV | Divide-and-Assemble: Learning Block-wise Memory for Unsupervised Anomaly Detection | Splits the image into small pieces, detects anomalies on each, and reassembles the results. | Jinlei Hou | Hikvision Research Institute | |
| Anomaly Detection | 2021 | ICCV | A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection | Self-supervised anomaly detection using transformed patch features. | Shelly Sheynin | Facebook AI Research | |
| Anomaly Detection | 2021 | ICLR | Learning and Evaluating Representations for Deep One-Class Classification | Anomaly detection that combines self-supervised learning with one-class classification. | Kihyuk Sohn | Google Cloud AI | |
| Anomaly Detection | 2021 | ICCV | Learning Unsupervised Metaformer for Anomaly Detection | Anomaly detection that leverages a model trained for general image reconstruction. | Jhih-Ciang Wu | Institute of Information Science, Academia Sinica, Taiwan | |
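The LoRA summary above ("attach a small number of parallel weights and fine-tune only those") can be made concrete with a minimal NumPy sketch. This is an illustration of the low-rank-update idea, not the paper's implementation: the dimensions, initialization scale, and the `forward` helper are all illustrative choices.

```python
import numpy as np

# Minimal sketch of the LoRA idea: freeze a pretrained weight W and learn a
# low-rank update B @ A in parallel, so only r * (d_in + d_out) parameters
# are trained instead of d_in * d_out. All sizes here are illustrative.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-initialized: update starts at 0

def forward(x):
    # adapted layer: original frozen path plus the low-rank correction
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# with B = 0 the adapted layer matches the frozen layer exactly
assert np.allclose(forward(x), W @ x)

full_params = d_out * d_in          # 4096 weights in the frozen matrix
lora_params = r * (d_in + d_out)    # 512 trainable weights in A and B
print(lora_params / full_params)    # fraction of parameters actually trained
```

During fine-tuning only `A` and `B` would receive gradients; at inference the update can be merged into `W` once (`W + B @ A`), so the adapted layer costs the same as the original.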