Yesterday morning I heard about my friend's frustration: he had to migrate seventy-nine domains into another AWS account because of a merger and acquisition at the company he works for. He told me he had already migrated two hosted zones with hundreds of records, copying them by hand, since AWS Route 53 has an import feature but no export feature. Very sad!
He was also concerned about potential downtime due to propagation delays, and most of all about missing emails.
I told him to create a new hosted zone with the same records and not to delete the old hosted zone for about a week, so that even resolvers still pointing to the old name servers keep getting answers. Zero downtime!
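If you're following along, creating the empty zone in the new account is a one-liner (example.com and the caller reference below are placeholders; the output includes the new zone's ID and its four fresh NS records):
aws route53 create-hosted-zone --name example.com --caller-reference migration-$(date +%s) --profile new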
Then I looked for an AWS CLI way to export the DNS records.
But first I configured the AWS CLI on my local machine with two profiles:
aws configure --profile old
# and
aws configure --profile new
After keying in my credentials, I verified that my auth was okay:
aws sts get-caller-identity --profile old
# and
aws sts get-caller-identity --profile new
You should get similar results with both.
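Both calls should print a small JSON blob like this (the IDs here are placeholders); if one of them fails, fix your credentials before moving on:
{
    "UserId": "AIDAEXAMPLE12345",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/migration-user"
}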
Now let's get all the records from the old account:
aws route53 list-resource-record-sets --hosted-zone-id <your_hosted_zone_id> --profile old --output json
Cool! Now let's use the redirection operator to save the output to a file:
aws route53 list-resource-record-sets --hosted-zone-id <your_hosted_zone_id> --profile old --output json > route53-records.json
This saves the records to route53-records.json.
Verify the file content with a text editor
vi route53-records.json
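If you have jq installed, you can also sanity-check how many records the export contains (optional, but handy when a zone has hundreds of records):
jq '.ResourceRecordSets | length' route53-records.json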
Now let's push the records to the new AWS account:
aws route53 change-resource-record-sets --hosted-zone-id <your_hosted_zone_id> --profile new --change-batch file://route53-records.json
Note that I'm using the new profile here, and this time the hosted zone ID of the zone in the new account. You get an error, because the format is wrong: change-resource-record-sets expects a change batch, not the raw list output. This is the format it expects:
{
  "Comment": "Optional comment describing the changes",
  "Changes": [
    {
      "Action": "CREATE | DELETE | UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A | AAAA | CNAME | MX | TXT | etc.",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "value"
          }
        ]
      }
    },
    {
      "Action": "CREATE | DELETE | UPSERT",
      "ResourceRecordSet": {
        "Name": "demo.example.com.",
        "Type": "A | AAAA | CNAME | MX | TXT | etc.",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "value"
          }
        ]
      }
    },
    // Additional changes if needed
  ]
}
This is the correct format for the change batch.
Now let's parse our export into that shape. It's all JSON parsing from here!
I created a script with Node.js.
Important: don't forget to remove the NS and SOA records, because the new hosted zone already comes with its own. The script below filters them out.
const fs = require('fs');

// The records below were pasted in from route53-records.json; you could also read that file with fs.readFileSync.
const records = {
  "ResourceRecordSets": [
    {
      "Name": "example.com.",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [
        {
          "Value": "110.88.60.99"
        }
      ]
    },
    {
      "Name": "example.com.",
      "Type": "MX",
      "TTL": 3600,
      "ResourceRecords": [
        {
          "Value": "0 example-com.mail.protection.outlook.com."
        }
      ]
    },
    {
      "Name": "example.com.",
      "Type": "NS",
      "TTL": 172800,
      "ResourceRecords": [
        {
          "Value": "ns-123.awsdns-00.net."
        },
        {
          "Value": "ns-456.awsdns-90.com."
        },
        {
          "Value": "ns-789.awsdns-17.org."
        },
        {
          "Value": "ns-1011.awsdns-29.co.uk."
        }
      ]
    },
    {
      "Name": "example.com.",
      "Type": "SOA",
      "TTL": 900,
      "ResourceRecords": [
        {
          "Value": "ns-000.awsdns-00.net. awsdns-hostmaster.amazon.com. 1 000 900 1234567 86400"
        }
      ]
    },
    {
      "Name": "test.example.com.",
      "Type": "PTR",
      "TTL": 60,
      "ResourceRecords": [
        {
          "Value": "cognito-proc-eng-alb-123456789.us-east-1.elb.amazonaws.com"
        }
      ]
    },
    {
      "Name": "www.example.com.",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [
        {
          "Value": "example-loader.netlify.app."
        }
      ]
    },
    {
      "Name": "_2a72ae4a4be3676casdfd368c.www.example.com.",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [
        {
          "Value": "_dc8dfasdf2395266f1asdfas44ca2e87c.xgxasdfxrgwpcb.acm-validations.aws."
        }
      ]
    }
  ]
};

const change_batch = {
  'Comment': 'Multiple records change batch',
  'Changes': []
};

records["ResourceRecordSets"].forEach((element) => {
  // Skip NS and SOA records: the new hosted zone already has its own.
  if (element.Type === 'NS' || element.Type === 'SOA') {
    return;
  }
  change_batch["Changes"].push({
    'Action': 'CREATE',
    'ResourceRecordSet': element
  });
});

console.log(change_batch);

// Convert the JSON object to a pretty-printed string (null and 2 control the formatting)
const jsonString = JSON.stringify(change_batch, null, 2);

// Specify the file path
const filePath = 'record_batch_changes.json';

// Write the change batch to the file
fs.writeFile(filePath, jsonString, (err) => {
  if (err) {
    console.error('Error writing file:', err);
  } else {
    console.log('Successfully wrote file:', filePath);
  }
});
The script transforms the JSON export into the expected change-batch format and writes it to the file "record_batch_changes.json".
Save the file as "parser.js" and run it with
node parser.js
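As an aside, if you'd rather not write a script at all, the same transformation can be sketched as a jq one-liner, NS/SOA filtering included (same idea, different tool):
jq '{Comment: "Multiple records change batch", Changes: [.ResourceRecordSets[] | select(.Type != "NS" and .Type != "SOA") | {Action: "CREATE", ResourceRecordSet: .}]}' route53-records.json > record_batch_changes.json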
Now let's copy the processed records to the new AWS account:
aws route53 change-resource-record-sets --hosted-zone-id <your_hosted_zone_id> --profile new --change-batch file://record_batch_changes.json
Important: remember to use the new profile.
If the response status says PENDING and you stripped the NS and SOA records, you should be good to go. Verify it from the AWS console.
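The command responds with a ChangeInfo object whose Status starts out as PENDING; you can poll it until it flips to INSYNC (the change ID is whatever the previous command returned):
aws route53 get-change --id <change_id_from_response> --profile new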
All the records for this domain were copied to the new hosted zone in the new AWS account.
Next, I created an array of the hosted zone IDs of the other domains and automated everything in one shot:
const hostedZoneIds = ['<id1>', '<id2>', '<id3>'];
and looped the same steps for all seventy-nine domains.
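As a rough sketch, the whole per-domain pipeline can also be wired up as a shell loop, shown here with the jq one-liner from above instead of the Node.js parser (the old zone IDs are made-up placeholders, and you'd fill in the matching new zone ID for each domain):
for old_id in Z1OLDZONEA Z2OLDZONEB Z3OLDZONEC; do
  aws route53 list-resource-record-sets --hosted-zone-id "$old_id" --profile old --output json > "records-$old_id.json"
  jq '{Comment: "Migrated records", Changes: [.ResourceRecordSets[] | select(.Type != "NS" and .Type != "SOA") | {Action: "CREATE", ResourceRecordSet: .}]}' "records-$old_id.json" > "batch-$old_id.json"
  aws route53 change-resource-record-sets --hosted-zone-id "<matching_new_zone_id>" --profile new --change-batch "file://batch-$old_id.json"
done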
Last but not least, I wrote another automation for updating the name server records on GoDaddy. It felt counterproductive and took a lot of my time, but automation had to be done 😂
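GoDaddy does expose a REST API for this sort of thing. Here's a rough sketch of the idea in curl; the exact endpoint and payload shape here are assumptions from memory, so double-check GoDaddy's API docs before relying on it:
# Assumption: the v1 domain-update endpoint accepts a nameServers array.
curl -X PATCH "https://api.godaddy.com/v1/domains/example.com" \
  -H "Authorization: sso-key $GODADDY_KEY:$GODADDY_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"nameServers": ["ns-123.awsdns-00.net", "ns-456.awsdns-90.com", "ns-789.awsdns-17.org", "ns-1011.awsdns-29.co.uk"]}'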
This automation saved my friend a lot of time and human error.
At last, I realised there was already a tool built for all of this: cli53.