diff --git a/Makefile b/Makefile
index a97cee0..9e89d08 100644
--- a/Makefile
+++ b/Makefile
@@ -57,8 +57,8 @@ download_collinfo:
 	curl -O https://index.commoncrawl.org/collinfo.json
 
 CC-MAIN-2024-22.warc.paths.gz:
-	@echo "downloading the list from s3, requires s3 auth even though it is free"
-	@echo "note that this file should be in the repo"
+	@echo "downloading the list from S3 requires S3 auth (even though it is free)"
+	@echo "note that this file should already be in the repo"
 	aws s3 ls s3://commoncrawl/cc-index/table/cc-main/warc/crawl=CC-MAIN-2024-22/subset=warc/ | awk '{print $$4}' | gzip -9 > CC-MAIN-2024-22.warc.paths.gz
 
 duck_local_files:
diff --git a/README.md b/README.md
index 2754c6a..17a0a5a 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ flowchart TD
 
 The goal of this whirlwind tour is to show you how a single webpage appears in all of these different places. That webpage is [https://an.wikipedia.org/wiki/Escopete](https://an.wikipedia.org/wiki/Escopete), which we crawled on the date 2024-05-18T01:58:10Z. On the way, we'll also explore the file formats we use and learn about some useful tools for interacting with our data!
 
 In the Whirlwind Tour, we will:
+
 1) explore the WARC, WET and WAT file formats used to store Common Crawl's data.
 2) play with some useful Python packages for interacting with the data: [warcio](https://github.com/webrecorder/warcio), [cdxj-indexer](https://github.com/webrecorder/cdxj-indexer), [cdx_toolkit](https://github.com/cocrawler/cdx_toolkit),
@@ -58,10 +59,6 @@ Next, let's install the necessary software for this tour:
 
 This command will print out a screen-full of output and install the Python packages in `requirements.txt` to your venv.
 
-### Install and configure AWS-CLI
-
-We will use the AWS Command Line Interface (CLI) later in the tour to access the data stored in Common Crawl's S3 bucket. Instructions on how to install the AWS-CLI and configure your account are available on the [AWS website](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
-
 ## Task 1: Look at the crawl data
 
 Common Crawl's website includes a [Get Started](https://commoncrawl.org/get-started) guide which summarises different ways to access the data and the file formats. We can use the dropdown menu to access the links for downloading crawls over HTTP(S):
@@ -179,7 +176,7 @@ The output has three sections, one each for the WARC, WET, and WAT. For each one
 
 ## Task 3: Index the WARC, WET, and WAT
 
-The example WARC files we've been using are tiny and easy to work with. The real WARC files are around a gigabyte in size and contain about 30,000 webpages each. What's more, we have around 24 million of these files! To read all of them, we could iterate, but what if we wanted random access so we could read just one particular record? We do that with an index. 
+The example WARC files we've been using are tiny and easy to work with. The real WARC files are around a gigabyte in size and contain about 30,000 webpages each. What's more, we have around 24 million of these files! To read all of them, we could iterate, but what if we wanted random access so we could read just one particular record? We do that with an index.
 ```mermaid
 flowchart LR
 warc --> indexer --> cdxj & columnar
@@ -344,7 +341,7 @@ python ./warcio-iterator.py testing.warc.gz
 
 Make sure you compress WARCs the right way!
 
-## Task 6: Use cdx_toolkit to query the full CDX index and download those captures from AWS S3
+## Task 6: Use cdx_toolkit to query the full CDX index and download those captures
 
 Some of our users only want to download a small subset of the crawl. They want to run queries against an index, either the CDX index we just talked about, or in the columnar index, which we'll talk about later.
 
@@ -404,7 +401,7 @@ Next, we use the `cdxt` command `warc` to retrieve the content and save it local
 
 Finally, we run `cdxj-indexer` on this new WARC to make a CDXJ index of it as in Task 3, and then iterate over the WARC using `warcio-iterator.py` as in Task 2.
 
-## Task 7: Find the right part of the columnar index 
+## Task 7: Find the right part of the columnar index
 
 Now let's look at the columnar index, the other kind of index that Common Crawl makes available. This index is stored in parquet files so you can access it using SQL-based tools like AWS Athena and duckdb as well as through tables in your favorite table packages such as pandas, pyarrow, and polars.
diff --git a/duck.py b/duck.py
index af1c677..7b36a37 100644
--- a/duck.py
+++ b/duck.py
@@ -70,7 +70,6 @@ def get_files(algo, crawl):
         files = f'https://data.commoncrawl.org/cc-index/table/cc-main/warc/crawl={crawl}/subset=warc/*.parquet'
         raise NotImplementedError('duckdb will throw an error because it cannot glob this')
     elif algo == 'cloudfront':
-        prefix = f's3://commoncrawl/cc-index/table/cc-main/warc/crawl={crawl}/subset=warc/'
         external_prefix = f'https://data.commoncrawl.org/cc-index/table/cc-main/warc/crawl={crawl}/subset=warc/'
         file_file = f'{crawl}.warc.paths.gz'