# LFS with a Free S3 Account

| Method | Pros | Cons |
|--------|------|------|
| GitHub Releases (via `gh release upload`) | 1 GB per file, unlimited downloads | Not an S3 API; requires Git LFS for >1 GB |
| Google Drive + `gdrive` CLI | 15 GB free | No S3 compatibility, rate limits |
| Local NAS + Tailscale | Unlimited, private | Requires your own hardware |

```bash
#!/bin/bash
# lfs-fetch-to-s3.sh -- stream every LFS source tarball straight into S3
BUCKET="lfs-builder"
SOURCE_URL="https://www.linuxfromscratch.org/lfs/view/stable/wget-list"

wget -q -O - "$SOURCE_URL" | while read -r url; do
    filename=$(basename "$url")
    echo "Uploading $filename to s3://$BUCKET/sources/"
    wget -q -O - "$url" | aws s3 cp - "s3://$BUCKET/sources/$filename" \
        --endpoint-url "https://<account_id>.r2.cloudflarestorage.com"
done
```

## 4.4 Configuring LFS to Use S3 as a Source Mirror

Modify the LFS environment to fetch from S3 when a file is missing locally:

```bash
# On the host (not inside the LFS chroot)
sudo apt install awscli        # Debian/Ubuntu
pip install awscli --upgrade   # or install/upgrade via pip
aws configure
```

When prompted by `aws configure`:

```
Access Key ID: <your_r2_key>
Secret Access Key: <your_r2_secret>
Default region: auto
Default output format: json
```

For R2, set a custom endpoint in `~/.aws/config`:

```ini
[profile r2]
endpoint_url = https://<account_id>.r2.cloudflarestorage.com
```

## 4.3 Downloading LFS Sources to S3 Directly

Instead of downloading sources to local disk first, fetch them and pipe them directly to S3, saving local storage.

Inside the LFS chroot, create a helper script `/usr/local/bin/lfs-fetch`:

```bash
#!/bin/bash
# Usage: lfs-fetch <filename> <fallback_url>
if aws s3 ls "s3://lfs-builder/sources/$1" --endpoint-url "$S3_ENDPOINT" >/dev/null 2>&1; then
    aws s3 cp "s3://lfs-builder/sources/$1" . --endpoint-url "$S3_ENDPOINT"
else
    wget "$2"
    # Optional: upload to S3 for next time
    aws s3 cp "$1" s3://lfs-builder/sources/ --endpoint-url "$S3_ENDPOINT"
fi
```

Then, in each package build script, replace `wget` with `lfs-fetch`.
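The helper's check-then-fall-back flow can be exercised offline before trusting it in a build. In this sketch a local directory stands in for `s3://lfs-builder/sources/` so no credentials are needed; all file and directory names are illustrative:

```bash
#!/bin/bash
# Offline sketch of lfs-fetch's cache-first logic: a local "mirror"
# directory stands in for the S3 bucket. All names are illustrative.
set -eu

mirror=$(mktemp -d)   # stand-in for the S3 bucket
work=$(mktemp -d)     # stand-in for the build directory
echo "cached tarball" > "$mirror/bash-5.2.tar.gz"

# fetch <filename>: copy from the mirror if present, else note the miss
fetch() {
    if [ -e "$mirror/$1" ]; then
        cp "$mirror/$1" "$work/"
        echo "hit: $1"
    else
        echo "miss: $1 (would fall back to wget)"
    fi
}

fetch bash-5.2.tar.gz   # present in the mirror, so it is copied
fetch gcc-13.2.tar.xz   # absent, so the wget fallback would run
```

The same two-branch structure carries over directly once the `[ -e ... ]` test is replaced by `aws s3 ls`.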

After Chapter 5, checkpoint the temporary toolchain by archiving `/mnt/lfs/tools` and uploading it:

```bash
tar -czf tools-chapter5.tar.gz -C /mnt/lfs tools
aws s3 cp tools-chapter5.tar.gz s3://lfs-builder/tools/
```

A rough cost comparison for this workflow:

| Operation | Cloudflare R2 (free) | Backblaze B2 (free) |
|-----------|----------------------|----------------------|
| 5 GB source storage | Covered (10 GB) | Covered (10 GB) |
| 1000 downloads of 5 MB each (log files) | Free egress | 5 GB egress → 5 days of free quota |
| 50 uploads of 10 MB | Free (Class A ops) | 500 ops free/day |
| Monthly cost for a typical LFS builder (2 builds/month) | $0 | $0 (if egress < 30 GB/month) |
| Restoring tools from S3 (500 MB) | $0 | 0.5 GB egress → within free tier |
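The checkpoint round-trip (archive with `tar -C`, restore elsewhere, compare) can be rehearsed locally before a real toolchain depends on it. The sample files below are placeholders, not real LFS contents:

```bash
#!/bin/bash
# Dry run of the chapter-5 checkpoint: archive a directory the same way
# as /mnt/lfs/tools, restore it into a second tree, and compare.
set -eu

src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/tools/bin"
echo "placeholder binary" > "$src/tools/bin/gcc"

tar -czf tools-chapter5.tar.gz -C "$src" tools   # same shape as the real command
tar -xzf tools-chapter5.tar.gz -C "$dst"         # what a restore from S3 would do

diff -r "$src/tools" "$dst/tools" && echo "checkpoint round-trip OK"
```

On a real build, the restore step is the same `tar -xzf ... -C` invocation applied to the tarball downloaded back from `s3://lfs-builder/tools/`.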

| Provider | Free Tier | S3-Compatible | Egress Limits | Best For |
|----------|-----------|---------------|---------------|----------|
| Backblaze B2 | 10 GB storage, 1 GB/day egress | Yes (via S3 API) | 1 GB/day free | Source tarballs, logs |
| IDrive e2 | 10 GB storage, 10 GB egress (one-time) | Yes | After trial: pay | Short-term builds |
| Wasabi | No perpetual free tier (30-day trial) | Yes | None (but paid) | Not recommended for free use |
| Cloudflare R2 | 10 GB storage, no egress fees | Yes | Free egress | Ideal for LFS |
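The B2 egress figure quoted earlier (1000 log downloads of 5 MB consuming 5 days of free quota) follows directly from the 1 GB/day allowance. A minimal arithmetic check, using only the numbers from the tables:

```bash
#!/bin/bash
# Arithmetic behind the B2 egress estimate: 1000 downloads of 5 MB each,
# against a free egress allowance of roughly 1 GB/day.
downloads=1000
size_mb=5
free_egress_gb_per_day=1

total_gb=$(( downloads * size_mb / 1000 ))       # total egress in GB
days=$(( total_gb / free_egress_gb_per_day ))    # days of quota consumed
echo "$total_gb GB of egress = $days days of B2 free quota"
```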

After each chapter, archive and upload the build logs:

```bash
# Run inside the LFS chroot or on the host
tar -czf logs-chapter5.tar.gz /mnt/lfs/sources/*/config.log /mnt/lfs/build.log
aws s3 cp logs-chapter5.tar.gz s3://lfs-builder/logs/
```

The temporary tools (`/mnt/lfs/tools`) are critical; as shown above, a 500 MB tarball of them can be stored in S3.
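The log-archiving step can also be rehearsed offline. This sketch builds a stand-in for the `/mnt/lfs` layout (paths and package names are illustrative), archives the logs, and lists the tarball members before any upload:

```bash
#!/bin/bash
# Offline rehearsal of the log-archiving step: create dummy logs in a
# stand-in directory tree, archive them, and verify the tarball contents.
set -eu

root=$(mktemp -d)
mkdir -p "$root/sources/binutils-2.41"
echo "configure output" > "$root/sources/binutils-2.41/config.log"
echo "build output"     > "$root/build.log"

tar -czf logs-chapter5.tar.gz -C "$root" sources/binutils-2.41/config.log build.log
tar -tzf logs-chapter5.tar.gz   # list members to confirm both logs were captured
```

Listing with `tar -tzf` before uploading is a cheap guard against shipping an empty or partial archive to the bucket.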