Neural Ranking Survey
4 Model Architecture
Taxonomy 1: Symmetric vs. Asymmetric
Symmetric (inputs s and t are homogeneous; swapping s and t does not change the final output)
Siamese networks (representation-focused), e.g. DSSM
Symmetric interaction networks (interaction-focused), e.g. ARC-II
Asymmetric (inputs s and t are heterogeneous)
Query split, e.g. DRMM
Document split, e.g. HiNT
Joint split, e.g. DeepRank
One-way attention
Taxonomy 2: whether s and t interact
Representation-focused
Interaction-focused
Taxonomy 3: whether multi-granularity or hierarchical features are used
Single-granularity
Multi-granularity (e.g. …)
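The representation-focused vs. interaction-focused split above can be illustrated with two toy scoring functions: one encodes s and t independently and compares the final vectors (DSSM-style), the other builds a word-by-word interaction matrix first and aggregates it (in the spirit of ARC-II/DRMM). A minimal NumPy sketch; the vocabulary, dimensions, and pooling choices are illustrative, not from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny illustrative word-embedding table (8-dim random vectors).
EMB = {w: rng.normal(size=8) for w in "what is neural ranking a survey of models".split()}

def embed(text):
    """Look up a (num_words, 8) embedding matrix for a whitespace-tokenized text."""
    return np.stack([EMB[w] for w in text.split()])

def representation_score(s, t):
    """Representation-focused: encode each text into one vector independently
    (here: mean pooling), then compare the two vectors (cosine similarity)."""
    vs, vt = embed(s).mean(axis=0), embed(t).mean(axis=0)
    return float(vs @ vt / (np.linalg.norm(vs) * np.linalg.norm(vt)))

def interaction_score(s, t):
    """Interaction-focused: build a word-by-word similarity matrix first,
    then aggregate it (here: max over t's words, then mean over s's words)."""
    M = embed(s) @ embed(t).T  # (len_s, len_t) interaction matrix
    return float(M.max(axis=1).mean())

print(representation_score("neural ranking", "a survey of neural ranking models"))
print(interaction_score("neural ranking", "a survey of neural ranking models"))
```

The key structural difference: the representation-focused model never lets the two texts see each other before the final vectors, while the interaction-focused model makes local matching signals available from the very first layer.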
1 Introduction
Advantages of neural ranking
Features can be learned directly from raw text, avoiding many of the limitations of hand-crafted features
The models have enough capacity for relevance matching, which is a complex task
Reviews the development of neural ranking
Starting from DSSM (2013), the first successful model
6 Model Empirical Comparison
On Ad-hoc Retrieval (s and t are heterogeneous)
Traditional models such as BM25 are simple but already perform well; they are strong baselines
Asymmetric, interaction-focused, multi-granularity architectures tend to work better on ad-hoc retrieval tasks
The larger the training data, the larger the advantage of neural ranking methods over traditional methods
On QA (s and t are homogeneous)
Unlike ad-hoc retrieval, symmetric models are more common here, because s and t in QA are homogeneous
No clear winner between the representation-focused and the interaction-focused architecture on QA tasks
The larger the training data, the larger the advantage of neural ranking methods over traditional methods
7 Trending Topics
Learning with External Knowledge (e.g. combining with entities)
...
5 Model Learning
Learning objective
Pointwise
Pairwise
Listwise
Multitask
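The first three objectives above differ in what one training example looks like: a single (query, doc) pair, an ordered pair of documents, or a whole ranked list. A hedged sketch of the three standard loss shapes (textbook formulations — squared error, hinge, and a ListNet-style softmax cross-entropy — not tied to any specific model in the survey):

```python
import math

def pointwise_loss(score, label):
    # Pointwise: each (query, doc) pair is scored alone, e.g. squared error.
    return (score - label) ** 2

def pairwise_loss(score_pos, score_neg, margin=1.0):
    # Pairwise: only the relative order of a relevant vs. an irrelevant
    # document matters, e.g. a hinge loss with a margin.
    return max(0.0, margin - (score_pos - score_neg))

def listwise_loss(scores, labels):
    # Listwise: compare the whole list at once, e.g. cross-entropy between
    # the softmax of the labels and the softmax of the scores (ListNet-style).
    zs = [math.exp(s) for s in scores]
    zl = [math.exp(l) for l in labels]
    ps = [z / sum(zs) for z in zs]
    pl = [z / sum(zl) for z in zl]
    return -sum(p * math.log(q) for p, q in zip(pl, ps))
```

Pointwise losses are easiest to train but ignore ranking structure; pairwise and listwise losses optimize order directly, which is usually closer to the evaluation metric.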
Training Strategies
Supervised (labels are fully human-annotated)
Weakly supervised (labels are generated automatically by an existing algorithm such as BM25)
Semi-supervised (a small amount of labeled data plus a large amount of unlabeled data) [one example: fine-tuning a weakly supervised model with a small amount of labeled data]
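The weakly supervised setting above can be sketched concretely: score (query, doc) pairs with an unsupervised ranker such as BM25 and use the resulting ranking as the training signal, with no human labels. A minimal sketch using the standard Okapi BM25 formula; the corpus, queries, and k1/b values are illustrative:

```python
import math
from collections import Counter

# Tiny illustrative corpus; a real weak-supervision setup would use a large one.
docs = [
    "neural ranking models learn features from raw text",
    "bm25 is a strong traditional baseline",
    "deep models need large training data",
]
toks = [d.split() for d in docs]
N = len(docs)
avgdl = sum(len(t) for t in toks) / N
df = Counter(w for t in toks for w in set(t))  # document frequency per term

def bm25(query, doc_id, k1=1.2, b=0.75):
    """Standard Okapi BM25 score of one document for one query."""
    tf = Counter(toks[doc_id])
    dl = len(toks[doc_id])
    score = 0.0
    for w in query.split():
        if w not in tf:
            continue
        idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
        score += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * dl / avgdl))
    return score

# Weak labels: the BM25 ranking per query becomes the training target,
# without any human annotation.
weak_labels = {q: sorted(range(N), key=lambda i: -bm25(q, i))
               for q in ["neural ranking", "training data"]}
print(weak_labels)
```

A neural ranker trained on such automatically generated rankings can then, per the semi-supervised example above, be fine-tuned on a small human-labeled set.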
3 A Unified Model Formulation
2 Major Applications