This paper was published in Neurocomputing.
This column focuses on learning how to write a paper abstract. It walks through the abstract with a seven-step approach: what to write in each step, and how to write it.
Abstract: Multi-view clustering aims to group data points into their classes. Exploiting the complementary information underlying multiple views to benefit clustering performance is one of the central topics of multi-view clustering. Most existing multi-view clustering methods only constrain diversity and consistency in the data space, without considering diversity and consistency in the learned label space. However, it is natural to take the impact of diversity in the learned label matrix into consideration, because different views generate different clustering label matrices, in which some labels are consistent and some are diverse. To overcome this issue, we propose a novel multi-view clustering method (DCMSC) that constrains diversity and consistency in both the learned clustering label matrix and the data space. Specifically, in the learned label space, we relax the learned common label matrix into a consistent part and a diverse part. Meanwhile, by applying an introduced row-aware diversity representation and the l2,1-norm to constrain the diverse part, wrong labels and the influence of noise on the consistent part are reduced. In the data space, we weight each view using a self-weighting strategy. Furthermore, we conduct clustering in spectral embedded spaces instead of the original data spaces, which suppresses the effect of noise and decreases redundant information. An augmented Lagrangian multiplier with alternating direction minimization (ALM-ADM) based optimization scheme guarantees the convergence of our method. Extensive experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our method.
Step 1: State the research background |
Multi-view clustering aims to group data points into their classes. Exploiting the complementary information underlying multiple views to benefit clustering performance is one of the central topics of multi-view clustering.
Step 2: Summarize current methods |
Most existing multi-view clustering methods only constrain diversity and consistency in the data space, without considering diversity and consistency in the learned label space.
Step 3: Point out the shortcomings of existing methods and the solutions the paper offers. |
However, it is natural to take the impact of diversity in the learned label matrix into consideration, because different views generate different clustering label matrices, in which some labels are consistent and some are diverse.
Step 4: Present the proposed method | To overcome this issue, we propose a novel multi-view clustering method (DCMSC) that constrains diversity and consistency in both the learned clustering label matrix and the data space.
Step 5: After presenting the method, give a general description of it | Specifically, in the learned label space, we relax the learned common label matrix into a consistent part and a diverse part. Meanwhile, by applying an introduced row-aware diversity representation and the l2,1-norm to constrain the diverse part, wrong labels and the influence of noise on the consistent part are reduced. In the data space, we weight each view using a self-weighting strategy. Furthermore, we conduct clustering in spectral embedded spaces instead of the original data spaces, which suppresses the effect of noise and decreases redundant information.
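Two of the ingredients named in this step, the l2,1-norm used to constrain the diverse part and the spectral embedding in which clustering is carried out, can be sketched concretely. The following is a minimal NumPy illustration under my own assumptions (the function names and the toy affinity matrix are invented for this example; this is not the paper's DCMSC implementation):

```python
import numpy as np

def l21_norm(E):
    # ||E||_{2,1} = sum over rows of each row's l2 norm.
    # Penalizing it encourages whole rows of E to shrink toward zero,
    # which is why it suits row-wise (per-sample) noise or wrong labels.
    return np.sum(np.linalg.norm(E, axis=1))

def spectral_embedding(W, k):
    # W: symmetric nonnegative affinity matrix (n x n).
    # Returns the k eigenvectors of the symmetric normalized Laplacian
    # with the smallest eigenvalues -- a "spectral embedded space" in
    # which cluster structure is easier to separate than in raw features.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(W.shape[0]) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L_sym)  # eigenvalues in ascending order
    return eigvecs[:, :k]

# Toy example: an affinity matrix with two well-separated blocks.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
F = spectral_embedding(W, k=2)  # rows of F would then be fed to k-means
```

In the embedding, points from the same block collapse onto the same row vector, so any simple clustering of the rows of `F` recovers the two groups.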
Step 6: Step 5 gave the theoretical exposition; this step is usually one or two sentences on how the proposed algorithm is optimized. It cannot be long, because of the word limit. (Optional, depending on the paper.) | An augmented Lagrangian multiplier with alternating direction minimization (ALM-ADM) based optimization scheme guarantees the convergence of our method.
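The ALM-ADM pattern named here (an augmented Lagrangian combined with alternating minimization, the ADMM family) is easiest to see on a much simpler problem than DCMSC. The sketch below, entirely my own toy and not the paper's solver, applies the pattern to the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z, alternating an x-minimization, a z-minimization, and a dual update:

```python
import numpy as np

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    # Augmented Lagrangian (scaled form):
    #   0.5*||Ax - b||^2 + lam*||z||_1 + (rho/2)*||x - z + u||^2
    # Alternately minimize over x, then z, then take a dual ascent step on u.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    AtA_rho = A.T @ A + rho * np.eye(n)  # constant across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))       # x-step
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                            # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = lasso_admm(A, b, lam=0.01)
```

With a small `lam` and noiseless measurements, `x_hat` recovers the sparse `x_true` closely; the same alternate-and-update skeleton is what an ALM-ADM solver for a multi-view objective repeats over its block variables.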
Step 7: Briefly describe the experiments; this part is formulaic and almost always follows the same pattern. | Extensive experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our method.
Recap of the abstract structure
Step 1: State the background: the ubiquity and importance of multi-view data
Step 2: Summarize current methods
Step 3: Point out the shortcomings of existing methods
Step 4: Present the proposed method
Step 5: After presenting the method, give a general description of it
Step 6: One or two sentences on how the proposed algorithm is optimized; keep it short because of the word limit
Step 7: Briefly describe the experiments, following the usual pattern
That is the rough process. I am still learning myself; if anything is lacking, please kindly point it out. Thank you very much.
Most abstracts follow these seven steps, though different steps may be merged together in the writing. When writing our own abstracts, we can use these steps as a reference; if you really cannot come up with something for one of them, leave it blank for the time being.