The Three Steps of Audio

Before I entered the tech world, I spent a decade running audio, video, and lighting for events. During that time, one of the best sound technicians I know taught me a simplified strategy for running audio gigs.

Here’s how it works:

Figure 1: 3 Steps of Audio

It is so simple.

Step 1: Get any sound coming from your source out to the speakers. During this step, the quality of the sound does not matter. Now that you have sound, you want to make sure you don’t lose it.

Step 2: Ensure that the sound can be controlled and available through the entire event. Checking microphone batteries, confirming that cable connections are secure, and ensuring you are not overpowering circuits are all important tasks during this step.

Step 3: Now that you know that the sound will be running smoothly, make sure it sounds nice. This is where the sound technician’s experience comes into play. They work on audio levels and equipment placement to make sure the audio is crisp and any feedback is removed from the system.

This three-step system can easily be applied to other areas, from cleaning out a junk closet to writing a blog to working in tech.

Three Steps to Build Great Code

A few months ago, I realized I could apply the same strategy to my work with Amazon Web Services (AWS). Here’s how it works for tech:

Figure 2: Three Steps to Build Great Code

Let’s walk through the three steps to help get a web app into AWS, a common challenge we solve for our clients. We’ll take the case of one of Castlerock’s new clients who needed to get their web app into AWS to reach thousands of users.

Step 1: Get Code Running…and Fast

We’re going to need a server to run the code and a database to store the data. That’s it – no bells and whistles quite yet. Since this client is new to AWS, we start by creating a Virtual Private Cloud (VPC) and splitting it into two subnets: one public-facing and one private-facing. We also add a NAT Gateway so resources in the private subnet can reach the internet without being directly exposed to it. In the public subnet, we use Amazon Elastic Compute Cloud (EC2) to host the code. The client’s SQL database belongs in the private subnet, where only the EC2 instance can reach it, and with SQL our fastest option is Amazon Relational Database Service (RDS). Finally, we register the domain and manage DNS through Route 53. With that, we should have a running application.
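
To make this concrete, here is a minimal sketch of that Step 1 layout using the AWS CDK in TypeScript (assuming CDK v2). The construct names, instance sizes, database engine, and domain name are illustrative placeholders, not the client’s actual setup:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as route53 from 'aws-cdk-lib/aws-route53';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'WebAppStack');

// VPC split into a public and a private subnet per AZ, with one NAT Gateway
const vpc = new ec2.Vpc(stack, 'AppVpc', {
  maxAzs: 2,
  natGateways: 1,
  subnetConfiguration: [
    { name: 'public', subnetType: ec2.SubnetType.PUBLIC, cidrMask: 24 },
    { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, cidrMask: 24 },
  ],
});

// A single EC2 instance in the public subnet to host the code
const web = new ec2.Instance(stack, 'WebServer', {
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
});

// SQL database (RDS MySQL here) in the private subnet
const db = new rds.DatabaseInstance(stack, 'AppDatabase', {
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
  engine: rds.DatabaseInstanceEngine.mysql({ version: rds.MysqlEngineVersion.VER_8_0 }),
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
});

// Only the web server can reach the database
db.connections.allowDefaultPortFrom(web);

// Route 53 hosted zone for the client's domain (domain name is a placeholder)
new route53.HostedZone(stack, 'AppZone', { zoneName: 'example.com' });
```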

Figure 3: Step 1 – Simple web application build

Step 2: Keep Code Running

Our client’s application is now up and running! This is great news. But what happens when traffic to the site picks up? If the single EC2 instance cannot handle the load from thousands of users at once, the site will slow down and may eventually stop responding. Or there might be too many queries running against the database at once; if that happens, the database will slow down drastically and affect all of the users.

We have to be prepared to prevent this from happening. To start, we can spread the traffic across multiple EC2 servers by placing them in an Auto Scaling Group (ASG) behind an Application Load Balancer (ALB). The ASG spans multiple Availability Zones, making the application less likely to go down if something happens to the physical servers in one zone. The Load Balancer splits traffic between the Availability Zones, and the Auto Scaling Group can be configured to launch a replacement server if one of the servers fails.
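
Continuing the same CDK sketch and reusing the vpc from Step 1, the ALB and ASG might look something like this (the capacity numbers and CPU target are illustrative):

```typescript
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Auto Scaling Group spread across the VPC's Availability Zones
const asg = new autoscaling.AutoScalingGroup(stack, 'WebAsg', {
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  minCapacity: 2,   // always keep at least two servers running
  maxCapacity: 6,   // allow scale-out when traffic picks up
});

// Internet-facing Application Load Balancer in the public subnets
const alb = new elbv2.ApplicationLoadBalancer(stack, 'WebAlb', {
  vpc,
  internetFacing: true,
});

// Send HTTP traffic to healthy instances; failed instances are replaced by the ASG
const listener = alb.addListener('Http', { port: 80 });
listener.addTargets('WebFleet', {
  port: 80,
  targets: [asg],
  healthCheck: { path: '/' },
});

// Add servers automatically when CPU climbs instead of letting the site slow down
asg.scaleOnCpuUtilization('CpuScaling', { targetUtilizationPercent: 60 });
```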

The next thing we could do to protect the database from failure is to switch from a standard RDS database to an Aurora cluster. Aurora runs the database across multiple instances: one instance accepts writes while the rest are read-only replicas. This keeps write queries from slowing down read queries.
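
A sketch of that Aurora switch, again extending the same CDK app (the engine version and the number of readers are just examples):

```typescript
// Aurora MySQL cluster: one writer plus read-only replicas in the private subnets
const auroraCluster = new rds.DatabaseCluster(stack, 'AppAurora', {
  engine: rds.DatabaseClusterEngine.auroraMysql({
    version: rds.AuroraMysqlEngineVersion.VER_3_04_0,
  }),
  writer: rds.ClusterInstance.provisioned('Writer'),
  readers: [
    rds.ClusterInstance.provisioned('Reader1'),
    rds.ClusterInstance.provisioned('Reader2'),
  ],
  vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
});

// The application sends writes to auroraCluster.clusterEndpoint and reads to
// auroraCluster.clusterReadEndpoint, which balances across the read replicas.
```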

 

Figure 4: Step 2 – Web application behind a Load Balancer

Step 3: Make Code Run Better

Finishing Step 2 should allow the application to stay up and running, freeing us up to build and improve upon what we have, which is what Step 3 is all about.

We first ask ourselves: how can we improve the application’s infrastructure so that it is easier for our client to update the code and better suited to the cloud? First, we decide to replace the EC2 instances with Docker containers hosted in Elastic Container Service (ECS). This gives us a faster way to deploy updates, scales more closely with traffic, and, thanks to autoscaling, is more cost efficient. There is also potential to use Lambda functions instead of servers for some parts of the application, which could reduce running costs as well.
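
A rough sketch of the ECS swap, still in the same CDK app; the container image, CPU and memory sizes, and scaling limits are placeholders:

```typescript
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

const ecsCluster = new ecs.Cluster(stack, 'AppCluster', { vpc });

// Load-balanced Fargate service: containers replace the EC2 fleet from Step 2
const service = new ecsPatterns.ApplicationLoadBalancedFargateService(stack, 'WebService', {
  cluster: ecsCluster,
  cpu: 256,
  memoryLimitMiB: 512,
  desiredCount: 2,
  taskImageOptions: {
    // Placeholder image; in practice this would be the client's app image in ECR
    image: ecs.ContainerImage.fromRegistry('public.ecr.aws/nginx/nginx:latest'),
    containerPort: 80,
  },
});

// Scale the number of containers with traffic instead of whole servers
const taskScaling = service.service.autoScaleTaskCount({ minCapacity: 2, maxCapacity: 10 });
taskScaling.scaleOnCpuUtilization('TaskCpuScaling', { targetUtilizationPercent: 60 });
```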

Automating the build and deployment of code is another way to improve the process. We can add CodePipeline for this. CodePipeline monitors the Git repo where the code is stored and kicks off a build when the code updates. When the build is done, the pipeline deploys that code to the ECS service. All of this happens with little to no interaction from the client.
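
Here is a sketch of what that pipeline could look like in the same CDK app. The repo owner, repo name, and CodeStar connection ARN are placeholders, and the build project assumes a buildspec.yml in the repo that produces the imagedefinitions.json file the ECS deploy action expects:

```typescript
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';

const sourceOutput = new codepipeline.Artifact();
const buildOutput = new codepipeline.Artifact();

// Build project; uses the buildspec.yml checked into the repo
const buildProject = new codebuild.PipelineProject(stack, 'AppBuild');

new codepipeline.Pipeline(stack, 'AppPipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [
        // Watch the client's repo through a CodeStar connection.
        // Owner, repo, and connection ARN below are placeholders.
        new actions.CodeStarConnectionsSourceAction({
          actionName: 'Source',
          owner: 'example-org',
          repo: 'example-web-app',
          branch: 'main',
          connectionArn: 'arn:aws:codestar-connections:us-east-1:111111111111:connection/EXAMPLE',
          output: sourceOutput,
        }),
      ],
    },
    {
      stageName: 'Build',
      actions: [
        new actions.CodeBuildAction({
          actionName: 'Build',
          project: buildProject,
          input: sourceOutput,
          outputs: [buildOutput],
        }),
      ],
    },
    {
      stageName: 'Deploy',
      actions: [
        // Roll the new image out to the Fargate service from the previous sketch
        new actions.EcsDeployAction({
          actionName: 'Deploy',
          service: service.service,
          input: buildOutput,
        }),
      ],
    },
  ],
});
```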

These are only a few of the ways we could improve the application for our client. As long as they continue to grow, there are countless ways we can use AWS to make the application more secure, reliable, performant, and cost efficient.

Conclusion

Most programming tutorials start out by having you write a simple “Hello World!” function. Then you try to run it. After that, you may update the function to do more and work better. This three-step process reflects that strategy: We will get something working. Then we will keep it running. Finally, we will make it more efficient.

Software changes and improves so quickly that it can feel overwhelming. Next time you find yourself trying to build too much at once, try stepping back and simplifying the process with the Three Steps to Build Great Code.