Dataset: WikiLinkGraphs' Snapshots
This dataset contains wikilink snapshots, i.e. links between Wikipedia articles, extracted by processing each revision of each Wikipedia article (namespace 0) from Wikimedia’s history dumps for the languages de, en, es, fr, it, nl, pl, ru, sv. The snapshots were taken on March 1st of each year from 2001 to 2018 (inclusive). This dataset is one of a series of related releases:
- rawwikilinks
- rawwikilinks-snapshots
- revisionlist
- snapshots (this one)
- redirects
- resolved-redirects
- wikilinkgraphs
Description
Each file contains the following fields:
- page_id: an integer, the page identifier used by MediaWiki. This identifier is not necessarily progressive; there may be gaps in the enumeration;
- page_title: a string, the title of the Wikipedia article;
- revision_id: an integer, the identifier of a revision of the article, also called a permanent id, because it can be used to link to that specific revision of a Wikipedia article;
- revision_parent_id: an integer, the identifier of the parent revision. In general, each revision has a unique parent; going back in time before 2002, however, we can see that the oldest articles present non-linear edit histories. This is a consequence of the import process from UseModWiki, the software previously used to power Wikipedia, to MediaWiki;
- revision_timestamp: the date and time of the edit that generated the revision under consideration.
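Given these fields, a snapshot file can be streamed with Python’s standard library. The sketch below is a minimal example, not part of the dataset tooling; the path and the column names are assumptions based on the sample file name and the field list above:

```python
import csv
import gzip

# Stream one snapshot file without decompressing it to disk.
# Path and column names are assumptions based on the description above.
path = 'enwiki/20180301/enwiki.link_snapshot.2018-03-01.csv.gz'
with gzip.open(path, mode='rt', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        page_id = int(row['page_id'])        # MediaWiki page identifier
        rev_id = int(row['revision_id'])     # permanent id of the revision
        print(page_id, row['page_title'], rev_id, row['revision_timestamp'])
        break  # show only the first row in this sketch
```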
Sample
Extract of the file enwiki.link_snapshot.2018-03-01.csv.gz in enwiki/20180301/:
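The rows below are illustrative only (hypothetical revision identifiers, not values taken from the actual dump), assuming a comma-separated layout with a header row naming the fields from the Description section:

```
page_id,page_title,revision_id,revision_parent_id,revision_timestamp
12,Anarchism,827905411,827840239,2018-02-27T17:04:13Z
25,Autism,828012168,827905533,2018-02-28T09:41:02Z
```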
Download
This dataset can be downloaded in two different ways:
HTTP (preferred method)
You can find the dataset at: cricca.disi.unitn.it/datasets/wikilinkgraphs-snapshots.
You can download the dataset with the following command:
```bash
dataset='wikilinkgraphs-snapshots'; adate=20180301; \
langs=( 'dewiki' 'enwiki' 'eswiki' 'frwiki' 'itwiki' \
        'nlwiki' 'plwiki' 'ruwiki' 'svwiki' ); \
for lang in "${langs[@]}"; do
  lynx \
    -dump \
    -listonly \
    "http://cricca.disi.unitn.it/datasets/${dataset}/${lang}/${adate}/" | \
    awk '{print $2}' | \
    grep -E "^http://cricca\.disi\.unitn\.it/datasets/${dataset}/" | \
    xargs -L1 -I{} wget -R '\?C=' {}
done
```
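If lynx is not available, the same loop can be sketched in Python. The per-file naming pattern below is an assumption extrapolated from the sample file name above (<lang>.link_snapshot.<year>-03-01.csv.gz), not a documented guarantee:

```python
import urllib.request

# Sketch of a direct download loop. The file naming pattern is an
# assumption extrapolated from the sample enwiki file name above.
BASE = 'http://cricca.disi.unitn.it/datasets/wikilinkgraphs-snapshots'
LANGS = ['dewiki', 'enwiki', 'eswiki', 'frwiki', 'itwiki',
         'nlwiki', 'plwiki', 'ruwiki', 'svwiki']

for lang in LANGS:
    for year in range(2001, 2019):  # snapshots from 2001 to 2018 inclusive
        name = f'{lang}.link_snapshot.{year}-03-01.csv.gz'
        url = f'{BASE}/{lang}/20180301/{name}'
        print('fetching', url)
        urllib.request.urlretrieve(url, name)
```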
dat (experimental)
(coming soon)
Code
This dataset has been processed with Python; see the wikidump project and the other repositories in the WikiLinkGraphs organization.
Authors
This dataset has been produced by:
- Cristian Consonni – DISI, University of Trento, Trento, Italy.
- David Laniado – Eurecat, Centre Tecnològic de Catalunya, Barcelona, Spain.
- Alberto Montresor – DISI, University of Trento, Trento, Italy.
This dataset has been produced as part of the research related to the ENGINEROOM project. EU ENGINEROOM has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 780643.
License
This dataset is released under the Creative Commons Attribution 4.0 International license (CC BY 4.0).
The original dumps are released under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-ShareAlike 3.0 License; see the legal info.
How to cite
If you use this dataset, please cite the main WikiLinkGraphs paper:
Consonni, Cristian, David Laniado, and Alberto Montresor. “WikiLinkGraphs: A complete, longitudinal and multi-language dataset of the Wikipedia link networks.” In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM 2019).
FAQs
What is the total size of the dataset, the number of files and the largest file in the dataset?
For each of the 9 languages you will find 18 gzipped files, one for each snapshot from 2001 to 2018 (inclusive). The total dataset size is 79GB, divided among the languages as follows:
- 11G dewiki/
- 29G enwiki/
- 5.9G eswiki/
- 9.2G frwiki/
- 5.7G itwiki/
- 4.1G nlwiki/
- 4.8G plwiki/
- 6.8G ruwiki/
- 3.9G svwiki/
The dataset contains 162 files (9 languages × 18 snapshots). The average file size is about 0.5GB, and the largest file is ~3.8GB (enwiki’s snapshot from 2018-03-01).
How are files organized?
Files are divided into directories, one for each language; each directory contains 18 files, one for each year from 2001 to 2018 (inclusive). Based on the sample path and the download URLs above, the layout looks like this:
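```
wikilinkgraphs-snapshots/
├── dewiki/20180301/
│   ├── dewiki.link_snapshot.2001-03-01.csv.gz
│   ├── ...
│   └── dewiki.link_snapshot.2018-03-01.csv.gz
├── enwiki/20180301/
│   └── ...
└── svwiki/20180301/
    └── ...
```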
Who produced this dataset and why?
- This dataset has been produced by Cristian Consonni, David Laniado and Alberto Montresor.
- Cristian Consonni and Alberto Montresor are affiliated with the Department of Information Engineering and Computer Science (DISI), University of Trento, Trento, Italy; David Laniado is affiliated with Eurecat - Centre Tecnològic de Catalunya, Barcelona, Spain.
- This dataset has also been produced as part of the research related to the ENGINEROOM project. EU ENGINEROOM has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 780643.
Questions?
For further info send me an e-mail.