Dataset: WikiLinkGraphs' RevisionLists
This dataset contains lists of all revisions of each Wikipedia article (namespace 0) from Wikimedia's history dumps, for the languages de, en, es, fr, it, nl, pl, ru, sv. It is one of the datasets released by the WikiLinkGraphs project:
- rawwikilinks
- rawwikilinks-snapshots
- revisionlist (this one)
- snapshots
- redirects
- resolved-redirects
- wikilinkgraphs
Description
page_id
: an integer, the page identifier used by MediaWiki. This identifier is not necessarily progressive: there may be gaps in the enumeration;
page_title
: a string, the title of the Wikipedia article;
revision_id
: an integer, the identifier of a revision of the article, also called a permanent id, because it can be used to link to that specific revision of a Wikipedia article;
revision_parent_id
: an integer, the identifier of the parent revision. In general, each revision has a unique parent; going back in time before 2002, however, the oldest articles present non-linear edit histories. This is a consequence of the import process from the software previously used to power Wikipedia, MoinMoin, to MediaWiki;
revision_timestamp
: the date and time of the edit that generated the revision under consideration;
user_type
: a string ("registered" or "anonymous"), specifying whether the user making the revision was logged in or not;
user_username
: a string, the username of the user that made the edit that generated the revision under consideration;
user_id
: an integer, the identifier of the user that made the edit that generated the revision under consideration;
revision_minor
: a boolean flag, with value 1 if the edit that generated the current revision was marked as minor by the user, 0 otherwise;
bytes
: an integer, the length in bytes of the text of that revision;
change_bytes
: an integer, the difference in length between a revision and the previous one.
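As a quick sanity check on these fields, a row can be parsed into typed values as sketched below. The column order and the sample row are assumptions for illustration, not taken from the actual files:

```python
import csv
import io

# Column order assumed to match the field listing above.
FIELDS = [
    "page_id", "page_title", "revision_id", "revision_parent_id",
    "revision_timestamp", "user_type", "user_username", "user_id",
    "revision_minor", "bytes", "change_bytes",
]
INT_FIELDS = {"page_id", "revision_id", "revision_parent_id",
              "user_id", "bytes", "change_bytes"}

def parse_revision(row):
    """Turn one CSV row into a dict with typed values."""
    rec = dict(zip(FIELDS, row))
    for key in INT_FIELDS:
        rec[key] = int(rec[key]) if rec[key] != "" else None
    rec["revision_minor"] = rec["revision_minor"] == "1"
    return rec

# Hypothetical row, for illustration only (not taken from the dataset).
sample = ("12,Anarchism,766,0,2001-10-11T19:35:34Z,"
          "anonymous,140.232.153.45,0,0,18201,18201")
rec = parse_revision(next(csv.reader(io.StringIO(sample))))
print(rec["page_title"], rec["bytes"], rec["revision_minor"])
# → Anarchism 18201 False
```

Note that for the first revision of a page, change_bytes simply equals bytes, since there is no previous revision to compare against.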
Sample
Extract of the file enwiki-20180301-pages-meta-history1.xml-p10p2115.7z.revisionlist.csv.gz in enwiki/20180301/:
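Once downloaded, the gzipped CSV files can be read directly with Python's standard gzip and csv modules. The sketch below writes a tiny stand-in file first so it is self-contained; the row content is made up for illustration:

```python
import csv
import gzip

# Tiny stand-in for a real revisionlist file; the row content is made up.
path = "revisionlist-sample.csv.gz"
with gzip.open(path, mode="wt", encoding="utf-8", newline="") as fh:
    fh.write("12,Anarchism,766,0,2001-10-11T19:35:34Z,"
             "anonymous,140.232.153.45,0,0,18201,18201\n")

# Reading back; a real file from the dataset is read the same way.
with gzip.open(path, mode="rt", encoding="utf-8", newline="") as fh:
    for row in csv.reader(fh):
        print(row[1], row[9])  # page_title, bytes
```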
Download
This dataset can be downloaded in two different ways:
HTTP (preferred method)
You can find the dataset at: cricca.disi.unitn.it/datasets/wikilinkgraphs-revisionlist.
You can download the dataset with the following command:
```bash
dataset='wikilinkgraphs-revisionlist'
adate=20180301
langs=( 'dewiki' 'enwiki' 'eswiki' 'frwiki' 'itwiki' 'nlwiki' 'plwiki' 'ruwiki' 'svwiki' )

for lang in "${langs[@]}"; do
    lynx -dump -listonly \
        "http://cricca.disi.unitn.it/datasets/${dataset}/${lang}/${adate}/" | \
      awk '{print $2}' | \
      grep -E "^http://cricca\.disi\.unitn\.it/datasets/${dataset}/" | \
      xargs -L1 -I{} wget -R '\?C=' {}
done
```
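If you prefer Python over lynx/wget, the same crawl can be sketched with the standard library alone. This is a sketch under the assumption that the server exposes plain HTML directory listings; the link extraction below is deliberately simplified:

```python
import re
import urllib.request

DATASET = "wikilinkgraphs-revisionlist"
ADATE = "20180301"
LANGS = ["dewiki", "enwiki", "eswiki", "frwiki", "itwiki",
         "nlwiki", "plwiki", "ruwiki", "svwiki"]

def extract_links(listing_html, base_url):
    """Pull file links out of an Apache-style directory listing page."""
    links = []
    for href in re.findall(r'href="([^"]+)"', listing_html):
        # Skip sort links ("?C=...") and parent/absolute links.
        if href.startswith(("?", "/", "..")):
            continue
        links.append(base_url + href)
    return links

def download_all():
    """Fetch every file for every language (requires network access)."""
    for lang in LANGS:
        base = f"http://cricca.disi.unitn.it/datasets/{DATASET}/{lang}/{ADATE}/"
        html = urllib.request.urlopen(base).read().decode("utf-8")
        for url in extract_links(html, base):
            urllib.request.urlretrieve(url, url.rsplit("/", 1)[1])

# The pure helper demonstrated on a minimal listing snippet:
snippet = '<a href="?C=N;O=D">Name</a> <a href="file1.csv.gz">file1.csv.gz</a>'
print(extract_links(snippet, "http://example.org/d/"))
# → ['http://example.org/d/file1.csv.gz']
```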
dat (experimental)
You can download the dataset using dat; it is available at https://datbase.org/CristianCantoro/wikilinkgraphs-revisionlist.
Once you have installed dat, you can download the dataset with:

```bash
dat clone dat://40b516f8a05d8c207d427a290a76916605ba31a60d7e0aa73d090c1decd9fcdc ~/wikilinkgraphs-revisionlist
```
Code
This dataset has been processed with Python; see the wikidump project and the other repositories in the WikiLinkGraphs organization.
Authors
This dataset has been produced by:
- Cristian Consonni – DISI, University of Trento, Trento, Italy.
- David Laniado – Eurecat, Centre Tecnològic de Catalunya, Barcelona, Spain.
- Alberto Montresor – DISI, University of Trento, Trento, Italy.
This dataset has been produced as part of the research related to the ENGINEROOM project. EU ENGINEROOM has received funding from the European Union’s Horizon 2020 research and innovation programme under the Grant Agreement no 780643.
License
This dataset is released under Creative Commons Attribution 4.0 International.
The original dump is released under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License, see the legal info.
How to cite
If you use this dataset please cite the main WikiLinkGraphs paper:
Consonni, Cristian, David Laniado, and Alberto Montresor. "WikiLinkGraphs: A Complete, Longitudinal and Multi-Language Dataset of the Wikipedia Link Networks." In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM 2019).
FAQs
What is the total size of the dataset, the number of files and the largest file in the dataset?
The total dataset size is 21 GB across 987 files. The average file size is 21.8 MB and the largest file is 588 MB.
How are files organized?
Files are divided into directories, one per language and snapshot date (for example, enwiki/20180301/).
Who produced this dataset and why?
- This dataset has been produced by Cristian Consonni, David Laniado and Alberto Montresor.
- Cristian Consonni and Alberto Montresor are affiliated with the Department of Information Engineering and Computer Science (DISI), University of Trento, Trento, Italy; David is affiliated with Eurecat - Centre Tecnològic de Catalunya, Barcelona, Spain.
- This dataset has also been produced as part of the research related to the ENGINEROOM project. EU ENGINEROOM has received funding from the European Union’s Horizon 2020 research and innovation programme under the Grant Agreement no 780643.
Questions?
For further information, send me an e-mail.