
Wikipedia:Link rot


From Wikipedia, the free encyclopedia


Like most large websites, Wikipedia suffers from the phenomenon known as link rot, where external links, often used as references and citations, gradually become irrelevant or broken (dead links) as the linked websites disappear, change their content, or move. This presents a significant threat to Wikipedia's reliability policy and its source citation guideline.


The effort required to prevent link rot is significantly less than the effort required to repair or mitigate a rotten link. Therefore, prevention of link rot strengthens the encyclopedia. This guide provides strategies for preventing link rot before it happens. These include the use of web archiving services and the judicious use of citation templates.


Editors are encouraged to add an archive link as a part of each citation, or at least submit the referenced URL for archiving,[note 1] at the same time that a citation is created or updated.


However, link rot cannot always be prevented, so this guide also explains how to mitigate link rot by finding previously archived links and other sources. These strategies should be implemented in accordance with Wikipedia:Citing sources#Preventing and repairing dead links, which describes the steps to take when a link cannot be repaired.


Except for URLs in the External links section that have not been used to support any article content, do not delete cited information solely because the URL to the source does not work any longer. Recovery and repair options and tools are available. Verifiability does not require that all information be supported by a working link, nor does it require the source to be published online.




Contents





  • 1 Preventing link rot

    • 1.1 Web archive services

      • 1.1.1 Robots.txt



    • 1.2 Alternative methods



  • 2 Repairing a dead link

    • 2.1 Searching


    • 2.2 Internet archives



  • 3 Mitigating a dead link


  • 4 Keeping dead links


  • 5 Automated tools


  • 6 Link rot on non-Wikimedia sites


  • 7 See also

    • 7.1 Bots



  • 8 Notes


  • 9 External links



Preventing link rot





Shortcut

  • WP:PLRT

As you write articles, you can help prevent link rot in several ways. The first way to prevent link rot is to avoid bare URLs by recording as much of the exact title, author, publisher and date of the source as possible. Optionally, also add the accessdate. If the link goes bad, this added information can help a future Wikipedian, either editor or reader, locate a new source for the original text, either online or a print copy. This may be impossible with only an isolated, bare URL that no longer works. Local and school libraries are a good resource for locating such offline sources. Many local libraries have in-house subscriptions to digital databases or inter-library loan agreements, making it easier to retrieve hard-to-find sources.
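For example, instead of a bare URL, the same source recorded with full details might look like the following minimal sketch using the {{cite web}} template (the URL, names, and dates are illustrative):

<ref>{{cite web <!-- all values below are illustrative --> |url=http://www.example.com/news/story.html |title=Example Story Title |author=Jane Smith |publisher=Example News |date=2008-01-24 |accessdate=2009-10-28}}</ref>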


As you edit, if an article has bare URLs in its citations, fix them, or at least tag the References section with {{linkrot}} as a reminder to complete the citation details as above and to categorize the article as needing cleanup.
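For instance, the tag can be placed at the top of the affected section; it takes a monthly date parameter (the value shown is illustrative):

{{linkrot|date=April 2017}} <!-- date is illustrative -->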


Web archive services



A second way to prevent link rot is to use a web archiving service. The two most popular services are the Wayback Machine, which crawls and archives many web pages and also has a form for suggesting a URL to be archived,[note 1] and WebCite, which provides on-demand web archiving. These services collect and preserve web pages for future use even if the original web page is moved, changed, deleted, or placed behind a paywall. Web archiving is especially important when citing web pages that are unstable or prone to change, such as time-sensitive news articles or pages hosted by financially distressed organizations. Once you have the URL for the archived version of the web page, use the archiveurl= and archivedate= parameters in the citation template that you are using. The template will automatically incorporate the archived link into the reference.



  • Dubner, Stephen J. (January 24, 2008). "Wall Street Journal Paywall Sturdier Than Suspected". The New York Times Company. Retrieved 2009-10-28.


  • Dubner, Stephen J. (January 24, 2008). "Wall Street Journal Paywall Sturdier Than Suspected". The New York Times Company. Archived from the original on 2011-08-15.
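In wiki markup, a citation with an archive link might look like the following sketch (the source URL and snapshot timestamp are placeholders, not the real ones):

{{cite web |url=http://www.example.com/article.html <!-- placeholder source URL --> |title=Wall Street Journal Paywall Sturdier Than Suspected |author=Dubner, Stephen J. |date=January 24, 2008 |publisher=The New York Times Company |archiveurl=https://web.archive.org/web/20110815000000/http://www.example.com/article.html <!-- placeholder snapshot --> |archivedate=2011-08-15}}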

However, not every web page can be archived. Webmasters and publishers may use a Robots exclusion standard in their domain to disallow archiving, or rely on complicated JavaScript, Flash, or other code that is not easily copied. In these cases, alternate methods of preserving the data may be available.


Robots.txt


A quirk in the way the Wayback Machine operates means archived copies of sites sometimes become unavailable, for example, the Freakonomics blog previously hosted at freakonomics.blogs.nytimes.com. Those URLs were later excluded from archiving by the New York Times' robots.txt file; this also made the previously archived content unavailable. robots.txt changes, however, can unhide that which previous changes have hidden, so do not delete an archiveURL solely because the archived content is currently unavailable. Luckily, in this case, not only can the content be found on a new site that is still open to archiving, but the site's robots.txt later changed to allow archiving again, and so the old archives are now unhidden (example).


Alternative methods


Most citation templates have a quote= parameter that can be used to store text quotes of the source material. This can be used to store a limited amount of text from the source within the citation template. This is especially useful for sources that cannot be archived with web archiving services. It can also provide insurance against failure of the chosen web archiving service.



  • Dubner, Stephen J. (January 24, 2008). "Wall Street Journal Paywall Sturdier Than Suspected". The New York Times Company. Archived from the original on 2008-04-30. ...the Wall Street Journal will not, as has been widely speculated, tear down its paywall entirely...
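In markup, the quoted text is stored with the |quote= parameter, as in this sketch (URL and archive details are placeholders):

{{cite web |url=http://www.example.com/article.html <!-- placeholder URL --> |title=Wall Street Journal Paywall Sturdier Than Suspected |author=Dubner, Stephen J. |date=January 24, 2008 |archiveurl=https://web.archive.org/web/20080430000000/http://www.example.com/article.html |archivedate=2008-04-30 |quote=...the Wall Street Journal will not, as has been widely speculated, tear down its paywall entirely...}}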

When using the quote parameter, choose the most succinct and relevant material possible that preserves the context of the reference. Storing the entire text of the source is not appropriate under fair use policies, so choose only the most important portions of the text that most support the assertions in the Wikipedia article.


A quote also helps in searching for other online versions of the source in the event that the original is discontinued.


Where applicable, public domain materials can be copied to Wikisource.


Repairing a dead link





Shortcut

  • WP:DEADLINK


There are several ways to try to repair a dead link, detailed below:


Searching


If the dead link includes enough information (article title, names, etc.) it is often possible to use it to find the Web page at a different location, either on the same site or elsewhere.


Search the site

Often web pages have simply moved, either in connection with a migration to a new server, or through general site maintenance. A site index or site-specific search feature is a useful place to locate the moved page. If these tools are not available, many Internet search engines allow a search on a specified site.


Search the Internet

A search engine query using the title of the page, possibly with a search restriction to the same site, might find the page. Using the examples from above, a web search (such as Google, Yahoo, etc.) might look like one of these:


site:freakonomics.blogs.nytimes.com/ "Wall Street Journal Paywall Sturdier Than Suspected"

site:nytimes.com/ "Wall Street Journal Paywall Sturdier Than Suspected"

"Wall Street Journal Paywall Sturdier Than Suspected"

Also, a search for some components of the dead link with punctuation removed is often fruitful; e.g. a search through Google for


groups.csail.mit.edu sFFT paper pdf

leads to a page enabling this fix. A search for an unusual or unique-looking substring of the URL, such as just the filename at the end, is often fruitful.


Internet archives


Check for archived versions of the page in the archiving services. If you find an archived version of the dead link, double-check that the citation still supports the article text. It is also a good idea to consult the citation's access date (if specified; failing that, search the article history for when the link was added) to see how close in time the archived version is to the link as it was cited.


The following archiving services are considered to be reliable:



  • Wayback Machine at https://archive.org/web/


  • WebCite at https://www.webcitation.org/query


  • UK Government Web Archive at http://webarchive.nationalarchives.gov.uk/

The Mementos interface allows you to search multiple archiving services for archived versions of some pages with a single request using the Memento protocol. Unfortunately, the Mementos web interface removes any parameters included with the URL, so a URL containing a "?" is unlikely to work properly when entered manually without changes. The most common change needed is to replace "?" with "%3F"; for example, http://www.example.com/page?id=7 would be entered as http://www.example.com/page%3Fid=7. This change alone will not be sufficient in every case, but it works most of the time. The bookmarklet in the table below properly encodes URLs so that searches will work.

Mementos looks like it is, or at least will be, very convenient. However, if archives are not found at Mementos, it should not be the only site checked: Mementos can sometimes return no results even when archives exist at sites it normally includes. An example is trying to find archives of Battle of the Atlantic. As of April 2014, Archive.org reports it has 63 or 64 archives (https, http), while Mementos reports 0 archives (https, http). Mementos usually finds archives at Archive.org, but sometimes it does not even when archives exist. If you try Mementos first, do not assume that there really are no archives just because Mementos reports none.


There are many Internet archive projects in existence.


When multiple archive dates are available, try to use the one that is most likely to be the contents of the page seen by the editor who entered the reference on the |accessdate=. If that parameter is not specified, a search of the article's revision history can be performed to determine when the link was added to the article.


View the archive to verify that it contains valid page information. Sometimes archives are actually archives of the fact that the link is dead, or that the archiving failed. If this is the case, try using an archive from a different date. Usually dates closer to the time the link was placed in the Wikipedia page, or earlier, are more likely to show valid information. Different archiving sites should also be tried.


If an archived version of a page is found for which the dead link supplied little information, the additional information may be enough, with a little extra work, to find a live copy. For example, the archived version of a dead bare link may provide title and author, allowing a live version to be found. An actual example: the dead link [http://www.vangoghmuseum.nl/vgm/index.jsp?page=2122&lang=en Van Gogh Museum, Amsterdam] leads to https://wayback.archive.org/web/20140323172316/http://www.vangoghmuseum.nl/vgm/index.jsp?page=2122&lang=en, which gives the title "The Courtesan (after Eisen), 1887"; a search on the www.vangoghmuseum.nl site finds a live link.


For most citation templates, archives are entered using the required |archiveurl= and |archivedate= parameters and the optional |deadurl= parameter. The primary link is automatically switched to the archive unless |deadurl=no, so for a dead URL the |deadurl= parameter can simply be omitted. To pre-emptively supply an archived version of a URL that is still live but may later go dead, |deadurl=no will change the display order, with the title retaining the original link and the archive linked at the end. When the original URL has been usurped for the purposes of spam or advertising, or is otherwise unsuitable, setting |deadurl=unfit or |deadurl=usurped suppresses display of the original URL (but |url= is still required).
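For instance, a pre-emptively archived citation whose original link is still live might be entered as follows (all values illustrative):

<ref>{{cite web <!-- all values illustrative --> |url=http://www.example.com/page.html |title=Example Page |accessdate=2017-04-01 |archiveurl=https://web.archive.org/web/20170401000000/http://www.example.com/page.html |archivedate=2017-04-01 |deadurl=no}}</ref>

If the original later goes dead, changing |deadurl=no to |deadurl=yes (or simply removing the parameter) switches the primary link to the archive.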
















Bookmarklets to check common archive sites for archives of the current page
(all open in a new tab or window)

  • Archive.org: javascript:void(window.open('https://web.archive.org/web/*/'+location.href))

  • UKGWA: javascript:void(window.open('http://webarchive.nationalarchives.gov.uk/*/'+location.href))

  • WebCite: javascript:void(window.open('https://www.webcitation.org/query.php?url='+location.href))

  • Wikiwix: javascript:void(window.open('http://archive.wikiwix.com/cache/?url='+location.href))

  • Mementos interface: javascript:void(window.open('https://www.webarchive.org.uk/mementos/search/'+encodeURIComponent(location.href)+'?referrer='+encodeURIComponent(document.referrer)))





  • archive.is: Following WP:Archive.is RFC and WP:Archive.is RFC 3, which had blacklisted it, the archive.is service was allowed back after WP:Archive.is RFC 4.

Mitigating a dead link





Shortcut

  • WP:MDLI

At times, all attempts to repair the link will be unsuccessful. In that event, consider finding an alternate source so that the loss of the original does not harm the verifiability of the article. Alternate sources about broad topics are usually easily located. A simple search engine query might locate an appropriate alternative, but be extremely careful to avoid citing mirrors and forks of Wikipedia itself, which would violate Wikipedia:Verifiability.


Sometimes, finding an appropriate source is not possible, or would require more extensive research techniques, such as a visit to a library or the use of a subscription-based database. If that is the case, consider consulting with Wikipedia editors at Wikipedia:WikiProject Resource Exchange, the Wikipedia:Village pump, or Wikipedia:Help desk. Also, consider contacting experts or other interested editors at a relevant WikiProject.


Keeping dead links





Shortcut

  • WP:KDL

A dead, unarchived source URL may still be useful. Such a link indicates that information was (probably) verifiable in the past, and the link might provide another user with greater resources or expertise with enough information to find the reference. It could also return from the dead. With a dead link, it is possible to determine if it has been cited elsewhere, or to contact the person originally responsible for the source. For example, one could contact the Yale Computer Science department if http://www.cs.yale.edu/~EliYale/Defense-in-Depth-PhD-thesis.pdf[dead link] were dead. Place {{dead link}} after the dead URL and just before the </ref> tag if applicable, leaving the original link intact.

Placing {{dead link}} auto-categorizes the article into the Articles with dead external links project category, and into a specific monthly date-range category based on its |date= parameter. Do not delete a URL just because it has been tagged with {{dead link}} for a long time.
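For example (URL and date illustrative):

<ref>[http://www.example.com/removed-page.html Example Page]{{dead link|date=April 2017}} <!-- illustrative URL and date --></ref>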


Automated tools


There have been at least six bots over the years that proactively and automatically archive external URLs. As of April 2017, two bots are operational. The primary bot is InternetArchiveBot (operator: Cyberpower678), which anyone can also run on individual pages: click on a page's "History" tab and find the link for "Fix dead links". The other bot is WaybackMedic (operator: Green Cardamom), which primarily checks for link rot among the archive links themselves, and also fixes various other problems related to archives.


LinkChecker is an open-source tool that can scan for broken links on any website, including Wikipedia.


Link rot on non-Wikimedia sites





Shortcut

  • WP:EXTERNALROT

Non-Wikimedia sites are also susceptible to link rot. Following a page move or page deletion, links to Wikipedia pages from other websites may break. In most page moves, a redirect will remain at the old page—this won't cause a problem. But if a page is completely deleted or usurped (i.e. replaced with other content) then link rot will have been caused on any external websites that link to it.


Replacement of page content with a disambiguation page may still cause link rot, but is less harmful because a disambiguation page is essentially a type of soft redirect that will lead the reader to the required content. If a page is usurped with content for another subject that shares its name, a hatnote may be placed at the top that directs readers to the original content on its new page—this again is a type of soft redirect, but less obvious. In these cases, readers arriving from an external rotten link should be able to find what they're looking for, but the situation is best avoided as they would have to get there via an additional page, potentially giving a poor impression of both Wikipedia and the linking website.


Because the Wikipedia software does not store Referer information, it will be impossible to tell how many external web pages will be affected by a move or deletion, but the risk of link rot will probably be greatest on older and higher profile pages. In truth, there is not a lot that can be done; maintenance of non-Wikimedia websites is not within the scope of being a Wikimedian, nor in most cases within our capability (although if they can be fixed, it would be helpful to do so). However, it may be good practice to think about the potential impact on other sites when deleting or moving Wikipedia pages, especially if no redirect or hatnote will remain. If a move or deletion is expected to cause significant damage, then this might be a factor to consider in WP:RM, WP:AFD and WP:RFD discussions, although other factors may carry more weight.


See also


  • Help:Archiving a source


  • Category:Articles with bare URLs for citations—the backlog of articles containing bare URLs at risk of link rot, sub-categorised by month


  • Category:Articles with dead external links—the backlog of articles containing dead links, sub-categorised by month


  • Help:Using the Wayback Machine—how-to guide

  • List of HTTP status codes


  • Special:LinkSearch—to find all the pages that contain a particular URL


  • Wikipedia:Citing sources/Further considerations#Pre-emptive archiving—brief guide on how to use various archiving services

  • Wikipedia:Citing sources#Preventing and repairing dead links


  • Wikipedia:External links#Longevity of links—prescribes removal of dead URLs from the "External links" section


  • Wikipedia:Offline sources—essay


  • Wikipedia:Using WebCite—how-to guide


  • WikiProject External links—dedicated to cleaning up overly long lists of external links and having articles conform to Wikipedia's external links guidelines

Bots



  • Wikipedia:Bot requests—general bot requests, e.g., concerning mass link replacements


  • User:InternetArchiveBot (IABot)—automatically fixes dead links whenever possible, and tags them when it isn't


  • User:Legobot—can mass tag links with {{dead link}}. Requests can be made at User talk:Legoktm.


  • WP:STiki/Dead links—Page reporting NEWLY added dead links, a component of the STiki project


  • WP:WAYBACKMEDIC (WaybackMedic)—automatically fixes dead links that are difficult to determine, plus various other fixes related to archives

Notes




  1. Using the web form at https://archive.org/web/, enter a URL in the box in the bottom right of the page below "Save Page Now" and click "SAVE PAGE". This should then archive the webpage and show the archived version of the page. If archiving is attempted and ultimately successful, the archived copy usually becomes available within minutes.

    Alternately, you can use the bookmarklets listed at Wikipedia:Citing sources/Further considerations#Archiving bookmarklets. The bookmarklets enable you to cause the page that you are viewing to be archived with a single click. A new tab will open with the progress of the archiving without disturbing the tab you are using to view the to-be-archived page. Bookmarklets are available for both Archive.org (the Wayback Machine) and WebCite.




External links



  • weblinkchecker.py—script from the Python Wikipedia Bot collection which finds broken external links.


  • UndeadLinks.org—allows you to search for a broken link's new address.[dead link]


  • Resurrect Pages—add-on for Firefox, provides links to seven cache/archive websites upon coming across a dead link.


  • 404-Error?—add-on for Firefox, automatically brings you to the archive.org version upon coming across a dead link.


  • PageHistory—addon for Safari.


  • Webcache—add-on for Opera.


  • Web Cache—add-on for Chrome.

  • Internet Archive









