IPsec tunnels can be set up over the internet or over Direct Connect (using a Public Virtual Interface). In this case we are connecting over the public backbone of AWS. We will create two VPN tunnels from the Transit Gateway and connect them to a single instance of the Juniper SRX in the Datacenter. In a real production environment we would set up a second router for redundancy and, for added bandwidth, set up multiple tunnels from each Juniper SRX (or whichever IPsec device you use). Each IPsec tunnel provides up to 1.25 Gbps, so traffic is spread across multiple tunnels using Equal Cost Multipath (ECMP) routing. On the AWS side, up to 50 parallel ECMP paths are supported; many vendors support 4-8 ECMP paths, so check with your vendor.
In the AWS Management Console, change to the region you are working in using the drop-down menu in the upper right.
In the AWS Management Console choose Services then select VPC.
From the menu on the left, scroll down and select Transit Gateway Attachments.
You will see the VPC Attachments listed, but we want to add one to connect our Datacenter. Click the Create Transit Gateway Attachment button above the list.
Fill out the Create Transit Gateway Attachment form exactly as below (note: these choices will match our configuration of the router on the other side of the VPN tunnels):
For Inside IP CIDR for Tunnel 1, use 169.254.10.0/30.
For Pre-Shared Key for Tunnel 1, use awsamazon.
For Inside IP CIDR for Tunnel 2, use 169.254.11.0/30.
For Pre-Shared Key for Tunnel 2, use awsamazon.
Once the page is filled out, click Create attachment at the bottom right.
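If you prefer to script this step, a VPN attachment can also be created with the AWS CLI. This is only a sketch, assuming you have (or first create) a customer gateway pointing at the SRX's public IP; the IDs, IP, and ASN below are placeholders, not values from this lab.
# Placeholder values for illustration only; substitute your own IDs, public IP, and ASN.
aws ec2 create-customer-gateway --type ipsec.1 --public-ip <SRX-public-IP> --bgp-asn <your-asn>
aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --options '{"TunnelOptions":[{"TunnelInsideCidr":"169.254.10.0/30","PreSharedKey":"awsamazon"},{"TunnelInsideCidr":"169.254.11.0/30","PreSharedKey":"awsamazon"}]}'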
While we are on the Transit Gateway Attachments page, let's go back to the top and give the VPN connection a name. Scan down the Resource type column for the VPN attachment. Note: you may have to hit the refresh icon in the upper right above the table for the new VPN to show. If you click the pencil that appears when you mouse over the Name column, you can enter a name. Be sure to click the check mark to save the name.
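The Name column is simply a Name tag on the attachment, so this naming step can also be done from the CLI if you are scripting the lab (the attachment ID below is a placeholder):
aws ec2 create-tags --resources tgw-attach-0123456789abcdef0 --tags Key=Name,Value=Datacenter-VPN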
From the menu on the left, select Site-to-Site VPN Connections. In the main panel, you will likely see that the VPN is in the state Pending; that's fine. Toward the bottom, click the Tunnel Details tab. Record the two Outside IP Addresses, starting with the one paired with the Inside IP CIDR range 169.254.10.0/30. Note: you can use Cloud9 as a scratch pad by clicking the + in the main panel and selecting New File. Be sure to paste them in the right order!
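If you would rather pull the outside IP addresses with the CLI than copy them from the console, a query along these lines should list each tunnel's inside CIDR next to its outside IP so you can keep them in the right order (the VPN connection ID is a placeholder):
aws ec2 describe-vpn-connections --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[0].Options.TunnelOptions[].[TunnelInsideCidr,OutsideIpAddress]' --output text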
From the menu on the left, scroll down and select Transit Gateway Attachments. We need to verify that the attachment we created above is no longer in the state Pending. Instead, it should be in the state Available, like all of the VPC attachments in the list.
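You can also check the attachment state from the CLI, for example:
aws ec2 describe-transit-gateway-attachments --filters Name=resource-type,Values=vpn \
  --query 'TransitGatewayAttachments[].[TransitGatewayAttachmentId,State]' --output table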
From the menu on the left, select Transit Gateway Route Tables. From the table in the main panel, select the Green Route Table. Toward the bottom, click the Associations tab. An association means that traffic arriving at the Transit Gateway from that attachment will use this route table to decide where the packet goes after routing through the TGW. Note: an attachment can only be associated with one route table, but a route table can have multiple associations. Here in the Green Route Table we already have one association, the Datacenter Services VPC. Click Create association in the Associations tab. From the drop-down list, select the VPN attachment (it should be the only one in the list without an associated route table). Click Create association.
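For reference, the same association can be made from the CLI; a sketch with placeholder IDs:
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0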
While on the Transit Gateway Route Tables page, take a look at the Propagations tab. These are the resources that dynamically inform the route table. An attachment can propagate to multiple route tables. For the Datacenter, we want to propagate to all of the route tables so the VPC associated with each route table can route back to the Datacenter. Let's start with the Green Route Table. We can see all of the VPCs are propagating their CIDRs to the route table. Since the Datacenter Services VPC is also associated with this route table, we need to propagate the VPN routes to the Green Route Table.
Click Create propagation, then in the "Choose attachment to propagate" field, select the VPN attachment (the one you named earlier) and click Create propagation.
Repeat the above step on the propagations tab for the Red Route Table and the Blue Route Table.
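Each of these propagations corresponds to one CLI call like the one below, run once per route table (placeholder IDs):
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0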
Take a look at each of the route tables and notice the Routes tab. You can see the routes that are propagated, as well as a static route that was created for you by the CloudFormation template. That's the default route (0.0.0.0/0) that directs traffic destined for the internet to the Datacenter Services VPC and ultimately through the NAT Gateway in that VPC. Note: there is also a route table with no name. This is the default route table; in this lab we do not intend to use it.
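If you want to confirm the static default route from the command line, you can search a route table and filter by route type (the route table ID below is a placeholder):
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
  --filters Name=type,Values=static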
Back on the Cloud9 browser tab, using the two VPN tunnel outside IP addresses recorded in the step above, cd to tgwwalk in the Cloud9 bash console and run the bash script ./createsrx.sh. Note: be sure to use the address that lines up with the Inside IP CIDR 169.254.10.0/30 as ip1.
Example, using the outside IP addresses recorded from the Site-to-Site VPN console:
cd tgwwalk
## ./createsrx.sh ip1 ip2 outputfile
./createsrx.sh 35.166.118.167 52.36.14.223 mysrxconfig.txt
note: AWS generates starter templates to assist with the configuration of the on-premises router. For your real-world deployments, you can download a starter template from the console for various devices (Cisco, Juniper, Palo Alto, F5, Check Point, etc.). A word of caution: look closely at the routing policy in the BGP section; you may not want to advertise a default route out, and you will likely also want a route filter to prevent certain routes from being propagated to you.
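For reference, the generic (vendor-neutral) configuration can also be pulled as XML with the CLI if you ever need it outside the console; the VPN connection ID below is a placeholder:
aws ec2 describe-vpn-connections --vpn-connection-ids vpn-0123456789abcdef0 \
  --query 'VpnConnections[0].CustomerGatewayConfiguration' --output text > vpn-config.xml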
On the left-hand panel, the output file should be listed. You may have to open the tgwwalk folder to see the .txt file. Open it, select all text (Ctrl-A on PC / Cmd-A on Mac), then copy it (Ctrl-C on PC / Cmd-C on Mac).
Using a bash tab in Cloud9, SSH back into the SRX. note: the SSH command for the SRX is given for you in the Exports menu in CloudFormation.
Enter configuration mode, which will take you to a config prompt:
root> configure
Enter configuration commands, one per line. End with CNTL/Z.
(edit)
root#
Once in configuration mode (note: you should see (edit) above the prompt), paste all text (Ctrl-V on PC / Cmd-V on Mac) from the output file created in the step above. The text will slowly paste into the configuration.
Once the paste is finished, while you are still at the root# prompt, type commit, wait a few seconds, and then type exit and press Enter.
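The end of the session should look roughly like this (output abbreviated):
root# commit
commit complete

root# exit
Exiting configuration mode
root>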
Now let's look at the new interfaces: show interfaces brief st0. You should see two new logical interfaces, st0.1 and st0.2, and both should show as up. *note: if they do not change from down to up after a minute, the likely cause is that the IP addresses were flipped when running the createsrx script.
ec2-user> show interfaces brief st0
Physical interface: st0, Enabled, Physical link is Up
Type: Secure-Tunnel, Link-level type: Secure-Tunnel, MTU: 9192, Speed: Unspecified
Device flags : Present Running
Interface flags: Point-To-Point
Logical interface st0.1
Flags: Up Point-To-Point SNMP-Traps Encapsulation: Secure-Tunnel
Security: Zone: trust
Allowed host-inbound traffic : bgp
inet 169.254.10.2/30
Logical interface st0.2
Flags: Up Point-To-Point SNMP-Traps Encapsulation: Secure-Tunnel
Security: Zone: trust
Allowed host-inbound traffic : bgp
inet 169.254.11.2/30
root> show bgp summary
Groups: 1 Peers: 2 Down peers: 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
169.254.10.1 65000 374 415 0 0 1:01:54 Establ
aws.inet.0: 4/4/4/0
169.254.11.1 65000 373 414 0 0 1:01:51 Establ
aws.inet.0: 0/4/4/0
ec2-user> show route table aws
aws.inet.0: 14 destinations, 18 routes (14 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
0.0.0.0/0 *[Static/5] 00:01:42
> to 10.4.0.1 via ge-0/0/1.0
10.0.0.0/16 *[BGP/170] 00:01:37, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.11.1 via st0.2
[BGP/170] 00:01:29, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
10.4.0.0/16 *[Static/5] 00:01:42
> to 10.4.8.1 via ge-0/0/0.0
10.4.0.0/22 *[Direct/0] 00:01:42
> via ge-0/0/1.0
10.4.0.12/32 *[Local/0] 00:01:42
Local via ge-0/0/1.0
10.4.8.0/21 *[Direct/0] 00:01:42
> via ge-0/0/0.0
10.4.8.11/32 *[Local/0] 00:01:42
Local via ge-0/0/0.0
10.8.0.0/16 *[BGP/170] 00:01:37, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.11.1 via st0.2
[BGP/170] 00:01:29, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
10.16.0.0/16 *[BGP/170] 00:01:37, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.11.1 via st0.2
[BGP/170] 00:01:29, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
10.17.0.0/16 *[BGP/170] 00:01:37, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.11.1 via st0.2
[BGP/170] 00:01:29, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
169.254.10.0/30 *[Direct/0] 00:01:54
> via st0.1
169.254.10.2/32 *[Local/0] 00:01:54
Local via st0.1
169.254.11.0/30 *[Direct/0] 00:01:54
> via st0.2
169.254.11.2/32 *[Local/0] 00:01:54
Local via st0.2
...
Notice that there is only one active next-hop address for each of the VPC CIDRs. We can fix this by allowing Equal Cost Multipathing (ECMP). Back in configuration mode, we will enable BGP multipath for the ebgp group in our aws routing instance:
set routing-instances aws protocols bgp group ebgp multipath
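If you have already dropped out of configuration mode, the full sequence to apply and activate the change looks roughly like this (don't forget the commit):
root> configure
root# set routing-instances aws protocols bgp group ebgp multipath
root# commit
root# exit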
Now run the show route table aws command again. Both tunnels now show up as next hops for each VPC CIDR.
ec2-user> show route table aws
aws.inet.0: 14 destinations, 18 routes (14 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
0.0.0.0/0 *[Static/5] 00:04:20
> to 10.4.0.1 via ge-0/0/1.0
10.0.0.0/16 *[BGP/170] 00:00:05, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
to 169.254.10.1 via st0.1
> to 169.254.11.1 via st0.2
[BGP/170] 00:04:07, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
10.4.0.0/16 *[Static/5] 00:04:20
> to 10.4.8.1 via ge-0/0/0.0
10.4.0.0/22 *[Direct/0] 00:04:20
> via ge-0/0/1.0
10.4.0.12/32 *[Local/0] 00:04:20
Local via ge-0/0/1.0
10.4.8.0/21 *[Direct/0] 00:04:20
> via ge-0/0/0.0
10.4.8.11/32 *[Local/0] 00:04:20
Local via ge-0/0/0.0
10.8.0.0/16 *[BGP/170] 00:00:05, MED 100, localpref 100, from 169.254.11.1
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
to 169.254.11.1 via st0.2
[BGP/170] 00:04:07, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
10.16.0.0/16 *[BGP/170] 00:00:05, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
to 169.254.10.1 via st0.1
> to 169.254.11.1 via st0.2
[BGP/170] 00:04:07, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
10.17.0.0/16 *[BGP/170] 00:00:05, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
to 169.254.10.1 via st0.1
> to 169.254.11.1 via st0.2
[BGP/170] 00:04:07, MED 100, localpref 100
AS path: 65000 I, validation-state: unverified
> to 169.254.10.1 via st0.1
169.254.10.0/30 *[Direct/0] 00:04:32
> via st0.1
169.254.10.2/32 *[Local/0] 00:04:32
Local via st0.1
169.254.11.0/30 *[Direct/0] 00:04:32
> via st0.2
169.254.11.2/32 *[Local/0] 00:04:32
Local via st0.2
...