If you are looking for an explainer on S3 redirection rules, you are going to have a tough time finding a good one. Information on redirection rules is scattered all over the place, no single source (including Amazon's) is even close to complete, and outcomes don't always match expectations. Welcome to cloud confusion.
So you have a blog that you no longer use, but you don't want to disable it. Instead, you want to turn it into a static website, hosted on AWS S3, that is nearly free to run and will never require an update or any upkeep. Easy enough, right? You can even rebuild the directory structure in S3, using S3 folders, so that some old article URLs still work (like MySite.com/2017/12/article.html).
Rebuilding the directory is great for a few hit articles, but what if you want to redirect all your old images to the home page, or 301 all the articles from 2014 (MySite.com/2014/*)? You need some kind of routing rule… possibly a conditional Redirection Rule.
Time for some good news/bad news. The good? S3 is very good at redirecting URLs. The bad? S3 is so good that there are multiple overlapping options, and you have to figure them out and keep track of what you have in place. The official guide is here, if you want a reference point.
In S3 > your site's bucket > Properties there is a set of options for Static Website Hosting.
Here you can choose to redirect an entire bucket to another URL — this is the simplest form of S3 redirection. It's normally used to redirect an empty bucket, say www.mySite.com, to the functioning one at mySite.com. This way you can have a site with a working www subdomain without actually maintaining that bucket.
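The same whole-bucket redirect can also be applied from the command line via `aws s3api put-bucket-website`. As a sketch (the host name and protocol here are placeholders — substitute your own), the website configuration JSON would look like:

```json
{
  "RedirectAllRequestsTo": {
    "HostName": "mySite.com",
    "Protocol": "https"
  }
}
```

Note that a bucket configured this way serves no content of its own; every request is answered with a redirect to the target host.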
If you instead opt to “Use this bucket to host a website” you can then use the S3 redirection rules. These are powerful XML-like instructions that can handle all sorts of URLs and URL sets along with specific responses (404, 301, etc.).
Note: These only work if you use the correct S3 endpoints!
These rules look like this:
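As a minimal sketch — assuming, per the earlier example, that you want to 301 everything under 2014/ to the home page (the prefix and target key are placeholders):

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <!-- Match any key that starts with 2014/ -->
      <KeyPrefixEquals>2014/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <!-- Send the visitor to the home page with a permanent redirect -->
      <ReplaceKeyWith>index.html</ReplaceKeyWith>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```

Each `<RoutingRule>` pairs a `<Condition>` (what to match) with a `<Redirect>` (what to do), and you can stack several rules inside one `<RoutingRules>` block.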
These redirect rules are, to put it nicely, awful to work with. They are poorly documented and difficult to write. The syntax, to my eye, looks strange as well. But the good news is that they work, and they are easy enough to diagnose when they go wrong — just keep pounding those URLs and looking for the desired response. Amazon has a few Routing Rule examples as well as the elements (that is, the available tools) posted, but I've found the docs to be very limited here.
Also note that, as with many parts of S3, there are some annoying quirks. For example, if you wanted to redirect missing content to another part of the site, you'd expect to key the rule on a 404 code — but S3 won't return a 404 unless the requester has permission to list the bucket (i.e., s3:ListBucket on the bucket). Without that permission, missing objects come back as 403, so you key the rule on the 403, which then redirects to your desired destination.
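The 403-instead-of-404 quirk above might be handled with a rule keyed on the error code rather than a prefix. A hedged sketch, assuming the goal is to send all missing content to the home page (host name is a placeholder):

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <!-- S3 returns 403, not 404, when the requester lacks s3:ListBucket -->
      <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>mySite.com</HostName>
      <ReplaceKeyWith>index.html</ReplaceKeyWith>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```

One caveat with this approach: it catches every 403, so a genuinely forbidden object and a missing one get the same treatment.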
Another rule could handle the case where you want to add a hash-bang (#!) to your URLs, for visitors who might have bookmarked them without one.
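A hash-bang rewrite can be sketched with `ReplaceKeyPrefixWith`, which swaps the matched prefix for a new one. The prefix here (`2017/`) is an assumption for illustration — use whatever path your bookmarked URLs actually start with:

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>2017/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <!-- /2017/12/article.html becomes /#!/2017/12/article.html -->
      <ReplaceKeyPrefixWith>#!/2017/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```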
Technically you can get involved with S3 Bucket Policies here as well, but that can get truly painful and more complex than we'll need for now. The important thing to know is that S3 Bucket Policies normally handle access to a site, while a separate mechanism, the x-amz-website-redirect-location header on individual objects, can redirect requests as well.
In case you want to use this, make sure to try out Amazon's Bucket Policy Generator, which actually works pretty well (though it isn't at all geared toward new users).
If you want to get specific with your redirection and you don't want to use the Routing Rules, you can set the metadata on an individual file to redirect it. This is manual, hard to track, and somewhat obscure, but it's doable. Just navigate in S3 to the file in question, go to Properties, and then find Metadata. Now select "Add Website Redirect Location" and put in the destination — either something like /page1.html or another site altogether!
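The same per-object redirect can be set from the AWS CLI by copying the object over itself with replaced metadata. A sketch, with the bucket and key names as placeholders:

```shell
# Set x-amz-website-redirect-location on one object
# (my-bucket and old-page.html are hypothetical names)
aws s3api copy-object \
  --bucket my-bucket \
  --key old-page.html \
  --copy-source my-bucket/old-page.html \
  --website-redirect-location /page1.html \
  --metadata-directive REPLACE
```

As with the console approach, this redirect only fires when the object is requested through the website endpoint.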
Sal Cangeloso April 29th, 2018
Posted In: AWS