Figure 1 (IMAGE)
Caption
Fig. 1. Directional graph embedding captures disruptiveness. Our embedding approach leverages the entire network structure to estimate the disruptiveness of each paper, representing the citing and cited features of papers separately (see Materials and Methods for a detailed explanation of the algorithm). (A) First, we generate random walks (blue arrow) on the citation network. (B) Our model learns two vectors ("future" and "past") for each paper that can be used to accurately predict "what comes before" (future vector) and "what comes next" (past vector) in the random-walk trajectories. (C) As a result, the future vector f approaches the vectors of descendant papers, whereas the past vector p approaches the vectors of antecedent papers. (D) For a developing paper, the vectors representing antecedent works and descendant works lie close together in the embedding space because descendant works rely heavily on antecedent works; this brings the future vector f and the past vector p close together as well. (E) For a disruptive paper, on the other hand, the distance between the future and past vectors grows, because the fewer connections between antecedent and descendant papers push their representation vectors far apart in the latent space.
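The pipeline the caption describes (random walks, then learning a separate "future" and "past" vector per paper, then measuring the distance between them) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a plain skip-gram-style objective with negative sampling over consecutive walk pairs, and all function names (`random_walks`, `train_directional`, `disruptiveness`) and hyperparameters are hypothetical; the actual algorithm is detailed in the paper's Materials and Methods.

```python
import random
import numpy as np

def random_walks(graph, walks_per_node=10, walk_len=5, seed=0):
    """Generate random walks on a directed citation network.
    graph: dict mapping each paper to the list of papers it links to."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in graph:
            walk = [start]
            while len(walk) < walk_len and graph.get(walk[-1]):
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

def train_directional(walks, nodes, dim=8, epochs=200, lr=0.05, neg=3, seed=0):
    """Learn two vectors per paper so that future[u] . past[v] is high
    for consecutive pairs (u, v) observed in the walks (skip-gram-style
    objective with negative sampling; a simplification for illustration)."""
    rng = np.random.default_rng(seed)
    idx = {n: i for i, n in enumerate(nodes)}
    F = rng.normal(scale=0.1, size=(len(nodes), dim))  # "future" vectors
    P = rng.normal(scale=0.1, size=(len(nodes), dim))  # "past" vectors
    pairs = [(idx[a], idx[b]) for w in walks for a, b in zip(w, w[1:])]
    sigm = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for u, v in pairs:
            # Positive pair: pull future[u] and past[v] together.
            g = 1.0 - sigm(F[u] @ P[v])
            F[u] += lr * g * P[v]
            P[v] += lr * g * F[u]
            # Negative samples: push future[u] away from random past vectors.
            for v_neg in rng.integers(0, len(nodes), size=neg):
                g = -sigm(F[u] @ P[v_neg])
                F[u] += lr * g * P[v_neg]
                P[v_neg] += lr * g * F[u]
    return idx, F, P

def disruptiveness(node, idx, F, P):
    """Cosine distance between a paper's future and past vectors:
    large for disruptive papers (panel E), small for developing ones (panel D)."""
    f, p = F[idx[node]], P[idx[node]]
    return float(1.0 - f @ p / (np.linalg.norm(f) * np.linalg.norm(p)))
```

On a toy citation graph such as `{"A": ["B"], "B": ["C", "D"], "C": [], "D": []}`, `disruptiveness("B", *...)` returns a score in [0, 2]; the point of the sketch is only to make the caption's three steps concrete, not to reproduce the paper's measurements.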
Credit
Munjung Kim
Usage Restrictions
Credit Munjung Kim
License
Original content