Russell Brandom at The Verge has a piece on Common Crawl, “a non-profit foundation dedicated to providing an open repository of web crawl data that can be accessed and analyzed by everyone.” At one extreme, that dataset could be used to build your own local or targeted search engine; at a smaller scale, it could be a boon to data journalists:
For example, web crawl data can be used to spot trends and identify patterns in politics, economics, health, popular culture and many other aspects of life. It provides an immensely rich corpus for scientific research, technological advancement, and innovative new businesses. It is crucial for our information-based society that the web be openly accessible to anyone who desires to utilize it.
Be forewarned: If you think a Hadoop cluster is a kind of Easter candy, this isn’t the weekend hacking project for you. (Here’s an earlier piece from MIT Technology Review.)
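For a sense of what working with the data actually looks like, here is a minimal Python sketch of parsing one record from a CDX-style index of the kind Common Crawl publishes, which maps URLs to their byte location inside the archived WARC files. The sample line below is illustrative, not copied from a real index.

```python
import json

# A CDX-style index line pairs a SURT-form URL key and a timestamp with
# a JSON payload locating the capture inside a WARC file. This sample
# line is fabricated for illustration, not a real Common Crawl entry.
sample = ('com,example)/ 20240101000000 '
          '{"url": "https://example.com/", "status": "200", '
          '"filename": "crawl-data/example.warc.gz", '
          '"offset": "1024", "length": "2048"}')

def parse_cdx_line(line):
    """Split a CDX line into its URL key, timestamp, and JSON metadata."""
    key, timestamp, payload = line.split(' ', 2)
    record = json.loads(payload)
    # offset and length identify the byte range of the WARC file
    # that holds this particular capture
    record['offset'] = int(record['offset'])
    record['length'] = int(record['length'])
    return key, timestamp, record

key, ts, rec = parse_cdx_line(sample)
print(key, ts, rec['url'], rec['offset'], rec['length'])
```

A targeted search engine or a data-journalism project would loop this over the index for the domains it cares about, then fetch only the matching byte ranges rather than the whole crawl.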