Wide crawls of the Internet conducted by the Internet Archive. Please visit the Wayback Machine to explore archived web sites. Since September 10, 2010, the Internet Archive has been running Worldwide Web Crawls of the global web, capturing web elements, pages, sites, and parts of sites. Each Worldwide Web Crawl was initiated from one or more lists of URLs known as "Seed Lists". Descriptions of the Seed Lists associated with each crawl may be provided as part of the metadata for...
Content crawled via the Wayback Machine Live Proxy, mostly by the Save Page Now feature on web.archive.org. The liveweb proxy is a component of the Internet Archive's Wayback Machine project. It captures the content of a web page in real time, archives it into an ARC or WARC file, and returns the ARC/WARC record to the Wayback Machine for processing. The recorded ARC/WARC file becomes part of the Wayback Machine in due course.
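The capture-to-WARC step described above can be sketched in a few lines. This is an illustrative simplification, not the proxy's actual implementation: a real writer also emits fields such as WARC-Record-ID, WARC-Date, and payload digests, and pairs each response record with a request record.

```python
def write_warc_response(uri: str, http_bytes: bytes) -> bytes:
    """Build a minimal WARC/1.0 'response' record for a captured page.

    Illustrative sketch only: real liveweb tooling also writes
    WARC-Record-ID, WARC-Date, digest headers, and request records.
    """
    headers = (
        "WARC/1.0\r\n"
        "WARC-Type: response\r\n"
        f"WARC-Target-URI: {uri}\r\n"
        "Content-Type: application/http; msgtype=response\r\n"
        f"Content-Length: {len(http_bytes)}\r\n"
        "\r\n"
    ).encode("utf-8")
    # A WARC record is its headers, the payload, then a blank-line separator
    # (two CRLF pairs) before the next record in the file.
    return headers + http_bytes + b"\r\n\r\n"

record = write_warc_response(
    "http://example.com/",
    b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n",
)
```

Records like this are concatenated (usually gzip-compressed per record) into the .warc.gz files that the Wayback Machine later indexes.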
Survey crawls are run about twice a year, on average, and attempt to capture the content of the front page of every web host ever seen by the Internet Archive since 1996.
Topic: survey crawls
The seed for Wide00014 was:
- Slash pages from every domain on the web:
  - a list of domains using Survey crawl seeds
  - a list of domains using the Wide00012 web graph
  - a list of domains using the Wide00013 web graph
- Top-ranked pages (up to a max of 100) from every linked-to domain, using the Wide00012 inter-domain navigational link graph:
  - a ranking of all URLs that have more than one incoming inter-domain link (rank determined by the number of incoming links, using Wide00012 inter-domain links)...
Wide17 was seeded with the "Total Domains" list of 256,796,456 URLs provided by Domains Index on June 26th, and crawled with max-hops set to "3" and de-duplication set to "on".
Web wide crawl number 16. The seed list for Wide00016 was made from the join of the top 1 million domains from CISCO and the top 1 million domains from Alexa.
A daily crawl of more than 200,000 home pages of news sites, including the pages linked from those home pages. Site list provided by The GDELT Project.
Topics: GDELT, News
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
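The "level 1" scoping rule can be sketched with Heritrix-style hop paths, where each discovered URI carries a string of letters recording how it was reached ('L' for a navigational link, 'E' for an embedded resource). This is a hypothetical simplification for illustration, not the crawler's actual scope configuration:

```python
def within_level_1(hop_path: str) -> bool:
    """Return True if a URI reached via this hop path is in scope for a
    'level 1' crawl: the seed itself, its embeds, one outbound link,
    and that link's embeds. ('L' = navigational link, 'E' = embed;
    an illustrative simplification of Heritrix hop-path scoping.)
    """
    return hop_path.count("L") <= 1

# Seeds ("") and their embeds ("E", "EE") are in scope, as is one link
# hop ("L") plus its embeds ("LE"); a second link hop ("LL") is not.
```

Under this rule a page two clicks away from a seed is rejected, while any chain of embedded images or stylesheets hanging off an in-scope page is kept.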
Web wide crawl with initial seedlist and crawler configuration from June 2014.
Web wide crawl with initial seedlist and crawler configuration from January 2015.
The seeds for this crawl came from:
- 251 million domains that had at least one link from a different domain in the Wayback Machine, across all time
- ~300 million domains that we had in the Wayback Machine, across all time
- 55,945,067 domains from https://archive.org/details/wide00016
This crawl was run with a Heritrix setting of "maxHops=0" (URLs including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from April 2013.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Crawl of outlinks from wikipedia.org started March 2016. These files are currently not publicly accessible. Properties of this collection: it has been several years since the last time we did this. For this collection, several things were done: 1. Duplicate detection was turned off, so this collection will be complete; there is a good chance we will share the data, and sharing data with pointers to random other collections is a complex problem. 2. For the first time, all the different wikis were included....
Wayback indexes. This data is currently not publicly accessible.
This "Survey" crawl was started on Feb. 24, 2018, and was run with a Heritrix setting of "maxHops=0" (URLs including their embeds). Survey 7 is based on a seed list of 339,249,218 URLs: all the URLs in the Wayback Machine for which we saw a 200 response code during 2017, based on a query we ran on Feb. 1, 2018. The WARC files associated with this crawl are not currently available to the general public.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from August 2013.
Web wide crawl with initial seedlist and crawler configuration from January 2012 using HQ software.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from April 2012.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from February 2014.
Screen captures of hosts discovered during wide crawls. This data is currently not publicly accessible.
The seed for this crawl was a list of every host in the Wayback Machine. This crawl was run at level 1 (URLs including their embeds, plus the URLs of all outbound links including their embeds). The WARC files associated with this crawl are not currently available to the general public.
Web wide crawl with initial seedlist and crawler configuration from October 2010.
Survey crawl of .com domains started January 2011.
Topic: webcrawl
Web wide crawl with initial seedlist and crawler configuration from March 2011 using HQ software.
Wide crawls of the Internet conducted by Internet Archive. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
Web wide crawl with initial seedlist and crawler configuration from September 2012.
Crawls of International News Sites
Web wide crawl with initial seedlist and crawler configuration from March 2011. This uses the new HQ software for distributed crawling by Kenji Nagahashi.
What's in the data set:
- Crawl start date: 09 March, 2011
- Crawl end date: 23 December, 2011
- Number of captures: 2,713,676,341
- Number of unique URLs: 2,273,840,159
- Number of hosts: 29,032,069
The seed list for this crawl was a list of Alexa's top 1 million web sites, retrieved close to the crawl start date. We used Heritrix (3.1.1-SNAPSHOT)...
This collection includes web crawls of the Federal Executive, Legislative, and Judicial branches of government performed at the end of US presidential terms of office.
Topics: web, end of term, US, federal government
Data crawled by Sloan Foundation on behalf of Internet Archive
Crawl of outlinks from wikipedia.org started February, 2012. These files are currently not publicly accessible.
Miscellaneous high-value news sites
Topics: World news, US news, news
Crawl of links posted to Hacker News.
Captures of pages from YouTube. Currently these are discovered by searching for YouTube links on Twitter.
Topics: YouTube, Twitter, Video
Shallow crawls that collect content 1 level deep including embeds. This data is currently not publicly accessible.
Crawl of outlinks from wikipedia.org started May, 2011. These files are currently not publicly accessible.
Geocities crawl performed by the Internet Archive. This data is currently not publicly accessible. From Wikipedia: Yahoo! GeoCities was a Web hosting service. GeoCities was originally founded by David Bohnett and John Rezner in late 1994 as Beverly Hills Internet (BHI), and by 1999 it was the third-most visited Web site on the World Wide Web. In its original form, site users selected a "city" in which to place their Web pages. The "cities" were metonymously named after...
CDX index shards for the Wayback Machine. The Wayback Machine works by looking up historic URLs based on a query. This is done by searching an index of all the web objects (pages, images, etc.) that have been archived over the years. This collection holds the index used for this purpose, which is broken into 300 pieces so they fit into items more naturally and distribute the lookup load. Each of these 300 pieces is stored in at least 2 items, and those are also stored on the backup...
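A range-partitioned index lookup of this kind can be sketched as follows. The SURT-style key function is simplified and the three shard boundaries are hypothetical stand-ins for the real 300-way split, which would be derived from the sorted CDX data itself:

```python
import bisect

# Hypothetical split points: shard i covers sort keys below boundary i,
# and the last shard covers everything at or above the final boundary.
SHARD_BOUNDARIES = ["com,example)/", "net,", "org,"]

def surt_key(url: str) -> str:
    """Convert a URL to a simplified SURT-style sort key:
    host labels reversed, then the path. e.g.
    http://archive.org/about -> org,archive)/about
    """
    host, _, path = url.split("://", 1)[1].partition("/")
    return ",".join(reversed(host.split("."))) + ")/" + path

def shard_for(url: str) -> int:
    """Pick the shard whose key range contains this URL's sort key."""
    return bisect.bisect_right(SHARD_BOUNDARIES, surt_key(url))
```

Because keys sort host-reversed, all captures for one host land in adjacent keys, so a query for a single URL or host prefix touches only one shard (or a small run of neighbors).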
This collection contains web crawls performed as part of the End of Term Web Archive, a collaborative project that aims to preserve the U.S. federal government web presence at each change of administration. Content includes publicly-accessible government websites hosted on .gov, .mil, and relevant non-.gov domains, as well as government social media materials. The web archiving was performed in the Fall and Winter of 2016 and Spring of 2017. For more information, see...
Topics: end of term, federal government, 2016, president, congress, government data
Crawl of outlinks from wikipedia.org started July, 2011. These files are currently not publicly accessible.
This collection contains web crawls performed on the US Federal Executive, Legislative & Judicial branches of government in 2020-2021. Information about this project can be found here: https://end-of-term.github.io/eotarchive/ You can submit URLs to be archived here: https://digital2.library.unt.edu/nomination/eth2020/add/
COM survey crawl data collected by Internet Archive in 2009-2010. This data is currently not publicly accessible.
Internet Archive crawldata from Survey Webwide Crawl, captured by crawl835.us.archive.org:survey from Fri Jun 9 20:38:43 PDT 2017 to Sat Jun 10 02:19:04 PDT 2017.
Topic: crawldata
Internet Archive crawldata from Webwide Crawl, captured by crawl830.us.archive.org:widewebcap from Mon Mar 21 16:55:53 PDT 2016 to Fri Apr 8 20:48:18 PDT 2016.
Topic: crawldata
Shallow crawl started 2013 that collects content 1 level deep, including embeds. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
Shallow crawl started 2013 that collects content 1 level deep, including embeds. Access to content is restricted. Please visit the Wayback Machine to explore archived web sites.
Survey crawl of .net domains started December 2010.
Topic: webcrawl
This collection contains web crawls performed as the pre-inauguration crawl for part of the End of Term Web Archive, a collaborative project that aims to preserve the U.S. federal government web presence at each change of administration. Content includes publicly-accessible government websites hosted on .gov, .mil, and relevant non-.gov domains, as well as government social media materials. The web archiving was performed in the Fall and Winter of 2016 to capture websites prior to the January...
Topics: end of term, federal government, 2016, president, congress
This collection contains web crawls performed as the post-inauguration crawl for part of the End of Term Web Archive, a collaborative project that aims to preserve the U.S. federal government web presence at each change of administration. Content includes publicly-accessible government websites hosted on .gov, .mil, and relevant non-.gov domains, as well as government social media materials. The web archiving was performed in the Winter of 2016 and Spring of 2017 to capture websites...
Topics: end of term, federal government, 2016, president, congress
Internet Archive crawldata from wikipedia outbound links, captured by crawl435.us.archive.org:wpo from Sat Apr 28 06:47:19 PDT 2012 to Sat Apr 28 01:26:02 PDT 2012.
Topic: crawldata