
Cristian Consonni

Ph.D. in Computer Science, free software activist, physicist and storyteller


Dataset: WikiLinkGraphs' Redirects

This dataset contains redirects in Wikipedia, i.e. alias names for Wikipedia articles, extracted by processing Wikimedia’s history dumps for the languages de, en, es, fr, it, nl, pl, ru, sv.

WikiLinkGraphs. This dataset is part of the WikiLinkGraphs family, a collection of datasets extracted from Wikipedia history dumps. See the other datasets:


Each line of the CSV files contains the following fields:

  • page_id: an integer, the page identifier used by MediaWiki. This identifier is not necessarily progressive; there may be gaps in the enumeration;
  • page_title: a string, the title of the Wikipedia article;
  • revision_id: an integer, the identifier of a revision of the article, also called a permanent id, because it can be used to link to that specific revision of a Wikipedia article;
  • revision_parent_id: an integer, the identifier of the parent revision. In general, each revision has a unique parent; going back in time before 2002, however, we can see that the oldest articles present non-linear edit histories. This is a consequence of the import process from the software previously used to power Wikipedia, MoinMoin, to MediaWiki;
  • revision_timestamp: the date and time of the edit that generated the revision under consideration;
  • revision_minor: a boolean flag, with value 1 if the edit that generated the current revision was marked as minor by the user, 0 otherwise;
  • a string, the page to which the redirect points;
  • redirect.tosection: a string, the section of the target page the redirect points to (the anchor of the wikilink), if any.


Extract of the file enwiki.redirects.20180301.csv.gz in enwiki/20180301/:

10,AccessibleComputing,862220,233192,2002-02-25T15:43:11Z,1,Accessible Computing,
10,AccessibleComputing,56681914,15898945,2006-06-03T16:55:41Z,1,Computer accessibility,
10,AccessibleComputing,74466685,56681914,2006-09-08T04:16:04Z,0,Computer accessibility,
10,AccessibleComputing,133452289,133180268,2007-05-25T17:12:12Z,1,Computer accessibility,
10,AccessibleComputing,381202555,381200179,2010-08-26T22:38:36Z,1,Computer accessibility,
10,AccessibleComputing,631144794,381202555,2014-10-26T04:50:23Z,0,Computer accessibility,
10,AccessibleComputing,767284433,631144794,2017-02-25T00:30:28Z,0,Computer accessibility,
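The rows above can be parsed with Python's standard csv module. This is a minimal sketch: the column order follows the field list above, and the underscore field names (e.g. redirect_to, redirect_tosection) are Python-friendly stand-ins chosen here, not names defined by the dataset.

```python
import csv
import io

# Column order as documented above; the last two names are stand-ins
# for the redirect target page and target section.
FIELDS = [
    "page_id", "page_title", "revision_id", "revision_parent_id",
    "revision_timestamp", "revision_minor", "redirect_to", "redirect_tosection",
]

# One line from the enwiki extract above; the trailing comma means the
# redirect points to the whole page, not to a specific section.
sample = "10,AccessibleComputing,767284433,631144794,2017-02-25T00:30:28Z,0,Computer accessibility,\n"

row = next(csv.reader(io.StringIO(sample)))
record = dict(zip(FIELDS, row))
```

Note that the trailing comma still produces eight fields: `redirect_tosection` is simply the empty string for redirects that point to a whole page.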


This dataset can be downloaded in two different ways:

HTTP (preferred method)

You can find the dataset at http://cricca.disi.unitn.it/datasets/wikilinkgraphs-redirects/.

You can download the dataset with the following command:

dataset='wikilinkgraphs-redirects'; adate=20180301; \
langs=( 'dewiki' 'enwiki'  'eswiki'  'frwiki'  'itwiki'  'nlwiki'  'plwiki'  'ruwiki' 'svwiki' ); \
for lang in "${langs[@]}"; do
  lynx \
    -dump \
    -listonly \
      "http://cricca.disi.unitn.it/datasets/${dataset}/${lang}/${adate}/" | \
  awk '{print $2}' | \
  grep -E "^http://cricca\.disi\.unitn\.it/datasets/${dataset}/" | \
  xargs -L1 -I{} wget -R '\?C=' {}
done
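If you prefer not to crawl the index pages with lynx, the download URLs can also be built directly. This is a sketch under the assumption that each file is served at the path implied by the grep pattern above and the directory layout described further down; the lynx pipeline remains the authoritative method.

```python
# Build the expected download URL for every language edition.
BASE = "http://cricca.disi.unitn.it/datasets/wikilinkgraphs-redirects"
DATE = "20180301"
LANGS = ["dewiki", "enwiki", "eswiki", "frwiki", "itwiki",
         "nlwiki", "plwiki", "ruwiki", "svwiki"]

urls = [f"{BASE}/{lang}/{DATE}/{lang}.redirects.{DATE}.csv.gz" for lang in LANGS]
```

Each URL can then be fetched with any HTTP client (wget, curl, or Python's urllib).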

dat (experimental)

(coming soon)


This dataset has been processed with Python, see the wikidump project and the other repositories in the WikiLinkGraphs organization.


This dataset has been produced by:

  • Cristian Consonni – DISI, University of Trento, Trento, Italy.
  • David Laniado – Eurecat, Centre Tecnològic de Catalunya, Barcelona, Spain.
  • Alberto Montresor – DISI, University of Trento, Trento, Italy.

This dataset has been produced as part of the research related to the ENGINEROOM project. EU ENGINEROOM has received funding from the European Union’s Horizon 2020 research and innovation programme under the Grant Agreement no 780643.


This dataset is released under Creative Commons Attribution 4.0 International.

The original dump is released under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License, see the legal info.

How to cite

If you use this dataset please cite the main WikiLinkGraphs paper:

Consonni, Cristian, David Laniado, and Alberto Montresor. “WikiLinkGraphs: A Complete, Longitudinal and Multi-Language Dataset of the Wikipedia Link Networks.” In Proceedings of the Thirteenth International AAAI Conference on Web and Social Media (ICWSM 2019).


What is the total size of the dataset, the number of files and the largest file in the dataset?

The dataset contains 9 files: one gzipped CSV file for each of the 9 languages. The total dataset size is 681M, divided among the languages like this:

  • 62M dewiki/
  • 353M enwiki/
  • 37M eswiki/
  • 47M frwiki/
  • 27M itwiki/
  • 21M nlwiki/
  • 17M plwiki/
  • 68M ruwiki/
  • 53M svwiki/

The average file size is about 76MB and the largest file is ~353MB (enwiki’s redirects).
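Recomputing these figures from the per-language sizes listed above (rounded to the nearest megabyte) is straightforward:

```python
# Per-language compressed sizes in MB, as listed above (rounded values).
sizes_mb = {
    "dewiki": 62, "enwiki": 353, "eswiki": 37, "frwiki": 47,
    "itwiki": 27, "nlwiki": 21, "plwiki": 17, "ruwiki": 68, "svwiki": 53,
}

total = sum(sizes_mb.values())              # ~685 MB from rounded values
average = total / len(sizes_mb)             # ~76 MB per file
largest = max(sizes_mb, key=sizes_mb.get)   # 'enwiki'
```

The small difference between the 681M total above and the sum of the rounded per-language sizes is due to rounding.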

How are files organized?

Files are organized into directories, one for each language, like this:

├── dewiki
│   └── 20180301
│       └── dewiki.redirects.20180301.csv.gz
├── enwiki
│   └── 20180301
│       └── enwiki.redirects.20180301.csv.gz
├── eswiki
│   └── 20180301
│       └── eswiki.redirects.20180301.csv.gz
├── frwiki
│   └── 20180301
│       └── frwiki.redirects.20180301.csv.gz
├── itwiki
│   └── 20180301
│       └── itwiki.redirects.20180301.csv.gz
├── nlwiki
│   └── 20180301
│       └── nlwiki.redirects.20180301.csv.gz
├── plwiki
│   └── 20180301
│       └── plwiki.redirects.20180301.csv.gz
├── ruwiki
│   └── 20180301
│       └── ruwiki.redirects.20180301.csv.gz
└── svwiki
    └── 20180301
        └── svwiki.redirects.20180301.csv.gz
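Given this layout, the relative path of any file follows a single pattern, so a downloaded tree can be traversed programmatically. A minimal sketch (the helper names here are illustrative, and it assumes the files have been downloaded into the layout shown above):

```python
import csv
import gzip

DATE = "20180301"

def redirects_path(lang, date=DATE):
    """Relative path of one redirects file inside the dataset tree."""
    return f"{lang}/{date}/{lang}.redirects.{date}.csv.gz"

def iter_redirects(lang, date=DATE):
    """Yield one row (a list of fields) per revision from a downloaded file."""
    with gzip.open(redirects_path(lang, date), "rt", encoding="utf-8") as f:
        yield from csv.reader(f)
```

For example, `redirects_path("itwiki")` resolves to `itwiki/20180301/itwiki.redirects.20180301.csv.gz`.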

Who produced this dataset and why?

  • This dataset has been produced by Cristian Consonni, David Laniado and Alberto Montresor.
  • Cristian Consonni and Alberto Montresor are affiliated with the Department of Information Engineering and Computer Science (DISI), University of Trento, Trento, Italy; David is affiliated with Eurecat - Centre Tecnològic de Catalunya, Barcelona, Spain.
  • This dataset has also been produced as part of the research related to the ENGINEROOM project. EU ENGINEROOM has received funding from the European Union’s Horizon 2020 research and innovation programme under the Grant Agreement no 780643.


For further info, send me an e-mail.