If your Node.js applications are deployed as a Docker container in production, doesn't it make sense to develop inside a container as well?
Follow along with this live-coding session to learn how to build a Node.js remote container-based development environment on AWS using Visual Studio Code Dev Containers, including support for step-debugging and development-specific package installation.
https://www.youtube.com/watch?v=_h9dz6drUNw
Also check out our Visual Studio Code Remote Dev Containers on AWS Set Up Guide to learn how to configure your local and remote machines, and you can find the code used throughout the video on GitHub.
If you have questions, you can find me on Twitter.
I'm Ryan Blunden, developer advocate at Doppler, and I'm really excited to be presenting a pretty ambitious topic to you today. Not just dev containers in VS Code, which is a very new technology, but how to actually run dev containers remotely on AWS. So, I'm going to share my screen and we're going to get straight into it.
Now, because this isn't your typical webinar, we've got a bit of a to-do list to get through. Now, you don't need to necessarily read all of these things, but this just gives you an idea of what we're going to try and tackle in the 45 minutes or maybe 42 minutes that are left for the session today.
So, just to start off with a really quick intro in terms of the application. The application is called the Mandalorian Gifs Generator, and it's a really simple app that, as you might guess, randomly generates Mandalorian gifs. I'm a big believer that if you want to learn something, you should build a real application and test it out that way. So, if we go to localhost:8080 and take a look, this is the sort of thing that we're going to be deploying in our remote container. Cool. So, let's stop that and head back to our list. The other thing I want to show you quickly is what it actually looks like to launch a dev container. Now, I'm going to be doing this with Docker locally because that's a good way to save time.
And so, if we bring up the command palette in Visual Studio Code, we can see the command to rebuild and reopen in the container. This is the exact flow you'll follow if you've got your code cloned locally and you want to launch that code in a container. And let's see. Ah, we have some sort of error. Now, what happens when you've got an error is that you bounce back to Visual Studio Code, so you can make that change and then go back into the container flow. And I have a suspicion that the reason why this is happening is to do with volume mounting. Now, if there is one area of getting this all to work in Visual Studio Code that is the most challenging, it is by far mounting your code in the container.
I'm going to get into this code a lot more later, but as you can see here, I'm using the workspace mount that will be in the virtual machine. Here, I'm going to be using the local version of that. And if you are going to start off with dev containers, I definitely recommend doing it locally first with a local installation of Docker, just because there are a lot of moving pieces. What we can see here is that once we have opened the container in Visual Studio Code, this is the really cool part. This is why Visual Studio Code, Microsoft and GitHub are going to succeed in that dream of having a completely remote development environment that still has all of the features that we're used to. So, we can see here that I've got all of my project files. I've got an integrated terminal, except this terminal is actually running inside the Docker container.
And what's really cool is that because it's Visual Studio Code, I still have all of the features in terms of step debugging and all of those sorts of things that I would normally have if I was using Visual Studio Code with everything running locally. So, I'll reopen this code locally. Now, I'm from Doppler and Doppler is a secrets manager. The purpose of all this isn't really to feature Doppler, but we'll be touching on some things related to dotenv files, which can make things a little tricky with Visual Studio Code. So, if you are going to use Doppler with Visual Studio Code, which I'd highly recommend you check out, what you'll need to do is launch Visual Studio Code with some environment variables that Doppler needs in order to configure itself, so it knows which secrets to pull in for which project.
To give you just a really quick peek at what Doppler looks like, essentially you have a list of projects, this one's Mandalorian Gifs, and just like an env file, here are all of the keys and the secret values over here as well. So, we'll see if we can get to doing a brief demonstration of Doppler later. Okay. So, I'm going to [inaudible 00:06:18] this out and now let's get back to our presentation. And I keep checking on time. All right. So, what I want to do is just really run you through the steps that you'll need to go through in terms of setup. Ideally, if you want to follow along, or if you want to do this afterwards, you'll clone the repository. And if you want to check that out, it's at Doppler HQ and then just search for the Mandalorian Gifs node repository. So, once you've done that and you've cloned it locally, then what you would do is just do what I did then.
You will use Docker and the Visual Studio Code Remote Containers extension to try this out locally. And the reason you want to try it out locally initially is because, as you saw, every time it goes from standard Visual Studio Code to the dev container, it's launching containers in the background and doing a bunch of setup. And as soon as you move this to a remote host, it becomes a lot slower. So, just like when you're developing applications, you want a really fast feedback loop. You want that for the dev container development process as well.
The next thing that you'll need is SSH auth set up. Now, if you don't know a great deal about OpenSSH, that's a topic in and of itself. Microsoft does have some good resources on learning how to use OpenSSH, and the reason why we need this is because the easiest way for our local Docker CLI to control the Docker host that's going to be on AWS is through an SSH tunnel. We could do it otherwise using TLS over TCP sockets, but that's really quite complicated, so we're going to be using SSH. Ideally, you've used SSH before and you already have an SSH identity set up, because that's what we're going to use to connect to the machine.
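If you're setting up an SSH identity from scratch, it's roughly the following; the key type, paths and placeholder address are just common defaults, not something prescribed in the video:

```bash
# Generate a key pair if you don't already have one
ssh-keygen -t ed25519 -C "you@example.com"

# Confirm the SSH agent knows about the key (add it if not)
ssh-add -l || ssh-add ~/.ssh/id_ed25519

# Once the VM exists, a quick connectivity test looks like:
#   ssh ubuntu@<your-lightsail-ip>
```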
Next, you're going to need some AWS credentials. Now, the good news for running this in the cloud is you don't necessarily need things like an EC2 instance with security groups and assumed IAM roles and all of that. That's perfectly good for production workloads, but all you need, and what I would recommend, is a Lightsail instance. Lightsail is kind of like DigitalOcean on AWS, in that it makes it easy to just create an instance. You can directly edit the network ports right in the UI, and it just makes it really, really simple. So, that's what I would recommend, and it's not too expensive either.
And then the optional step is installing the Doppler CLI locally. If you want to do that, it's really, really easy, and what's good is it's free to get started. So, you don't have to enter a credit card or anything like that, and you still get all the features you'd need if you wanted to use Doppler in production. If you head to the install page, you'll see how to install the Doppler CLI for basically every operating system imaginable, and we've got first-class support for Windows as well.
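For example, on macOS with Homebrew the install and initial setup look like this; the install page covers every other platform, so treat this as a sketch:

```bash
# Install the Doppler CLI via Homebrew (other platforms are on the install page)
brew install dopplerhq/cli/doppler

# Authenticate, then link the current directory to a Doppler project and config
doppler login
doppler setup
```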
Okay. Now, let's get into some concepts. I'm going to presume that because you are attending this webinar, or you're watching it after the fact, you know what Docker is, you know what remote development is, and you're on board. You want the future sooner. Now, if you're not sure, remote development is compelling for a couple of reasons. One is that if your application is deployed in production in a container, then it makes sense for your development environment to be as close to that as possible. If you're using a different version of Node locally because that's what comes with Homebrew, and that's different from what's in your container, that's just one example of where unexpected things can happen when you deploy your application to a production environment. Now, in terms of why development containers as opposed to virtual machines: GitHub Codespaces is going to run in a container.
And one of the cool things that they're going to be able to do as a built-in service, I imagine, is that you will be able to check out a teammate's branch as part of reviewing a pull request. That's going to open in a remote dev container. And as long as there's a way to get the secrets that the application requires, so it can start up and configure itself, you can actually preview the application that pull request is for without having to bring it down locally and do the whole dance.
So, that's what Visual Studio Code and GitHub are moving towards, and it is incredible for that. You don't necessarily need to have the project locally and rebuild the container. You can optionally just say, I don't want you to build a container, I just want you to use an existing image and launch me into that. So, it's pretty amazing technology and that's where the future is as well: being able to spin up a container development environment in minutes without having to do much work at all, because the hard work is just getting Visual Studio Code to be able to control Docker. All right. So, we've talked about why remote containers.
Now, if you go to Visual Studio Code's site, there is an entire section dedicated to remote development in general, because there are a couple of options to get started. One is developing remotely on the VM itself, versus in a remote container. Now, I would say that remote containers are where you want to end up, or perhaps just local containers, but doing remote development over SSH is a good way to start, because at least it'll expose you to tools such as the Remote Explorer and you'll get to configure OpenSSH. So, that is a great way to get going. In terms of which one is better, containers are great because they're more lightweight, you can run different sorts of applications, and you can run applications that have multiple containers as well.
But a remote VM is still remote development, so that's super cool. Now, really briefly, to demystify the magic that you saw before, I want to touch on how Visual Studio Code itself is architected, because it's not necessarily working the way that you think. Focusing on this diagram here: Visual Studio Code is obviously running on my machine, but when we're actually running a container, only the things that need to run locally do. So, things like any theme I have are running locally, but what runs remotely is the VS Code server. And here, where we see the exposed port being opened, it is OpenSSH that is providing a tunnel into the VM and then into the container itself.
This is really powerful. This is why, for instance, VS Code's JavaScript and TypeScript language server is able to run remotely and give us all of that code intelligence. So, all of the heavy lifting is being done here; this is essentially almost just a thin-client text editor at this point. Visual Studio Code is also taking care of things like port-forwarding from our local machine to inside the container, so we can just hit our application at localhost:8080 and it tunnels that through to the actual container. So, the amount of complexity that Visual Studio Code is taking care of for us is huge, and the fact that we can run just about any Visual Studio Code extension inside our containers as well is just phenomenal. So, this isn't a stripped, pared-back version of remote development in VS Code; they are trying to give you the full, entire experience.
Okay. The next step, and I won't have a chance to go into this in too much detail, but it's a similar thing with Docker. If you've used Docker locally, it's easy to think of Docker as just one encapsulated application, but it also has a client and a host. And so, what happens at the moment on my machine is that when I do docker run, everything is running in a virtual machine on my own machine.
But what I can do is configure the client and say, "Client, I actually want you to control this different Docker host over SSH." And there's a way to configure Doppler... Sorry, Docker, with things called contexts. That's what enables you to say, "Hey, Docker, I want to use this remote host," or, "I want to switch back to doing something local." So, that's just a really cool Docker feature to learn about that is going to come in handy, because obviously when VS Code is running our dev containers, we want to run the dev containers on our remote VM.
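As a quick illustration of that client/host split, a single command can be pointed at a remote engine over SSH; the ubuntu@aws-dev address is an assumption based on the VM set up later in the session, and contexts, the persistent version of this, are covered shortly:

```bash
# One-off: point a single Docker CLI command at a remote engine over SSH
# (the alias aws-dev is created later in the session)
DOCKER_HOST="ssh://ubuntu@aws-dev" docker ps
```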
We're all good. All right. Let's keep cracking on. How am I going for time? Yeah, not too bad. Okay. Now, some of these steps for you, they might be very pedestrian, very boring, but I want to try and set everything up from scratch, A, because if something goes wrong, that's an opportunity to learn. And, B, it's important to see how all these pieces fit together because that's not something I've seen in any other webinar or tutorial so far. And that's why I was so excited to share this with you.
Okay. So, the first thing that you need to do, when you go in to create the instance, is first go into your account and add your SSH key. The reason you want to add your own SSH key is so that you don't have to use the one that they provide, which you'd have to download, set the right file permissions on, and reference with the right flag in OpenSSH. You don't want to have to do any of those things. You could add their key to your SSH agent, but that's tricky because it doesn't persist across machine restarts. Chances are, if you're a developer, you've already got an SSH key, so just upload that; it's going to make things so much easier.
Now, if we head back, we're going to create our instance. In terms of how big your instance should be, well, I guess it depends on what you're going to be doing. If you're running all the services in a monorepo, then you're going to need something really beefy. And that is probably one of the biggest benefits of developing remotely as well: even if you've got a massively specced MacBook Pro, it still may not be able to launch every container that you need. So, in terms of which machine, it really depends on the demands, it depends on how much your boss will let you spend, it depends on a lot of things. So, I'm going to choose just this one; it's got a decent number of CPUs, so that should definitely do it. Check that the SSH key you uploaded is the one that's selected here; it's going to make it a lot easier. Let's call it DopplerDevContainer, and then the other thing that we'll want to do is add a launch script, because we need to install Docker and get that all going.
And so, the easiest way to do this... This script, by the way, all of this code is available here if you go into the AWS... Oh, that's interesting. And then you can just grab it here. All right. So, what we're going to do is grab all of this code, go back to Lightsail, paste it in here, and then it's going to execute that as part of setting the machine up. So, we'll create the instance and this will take about three to five minutes, something like that. Okay. So, while it's still booting up and installing Docker and things like that, what I'm going to do is copy this IP address and create a host alias. Now, the reason why I do this is I'm not good at remembering things in general, so I don't have much hope of remembering an IP address. But what I can do is go into a regular terminal, not VS Code.
sudo nano, and I'm just going to make an entry in the hosts file. Now, maybe you haven't used a hosts file before, but it's a really cool way of taking an IP address and adding an alias. So, instead of having to refer to the IP when we set up our SSH connection, we just use this instead, and let's save that. Cool. All right. So, now this isn't going to work because I don't think ping is allowed through, but you can see that it did resolve to that IP address. All right. So, let me keep going back to my list. We've done this. We're creating the virtual machine, that's a work in progress. We've created our host alias, and then the next step is configuring the VM. So, by now we should probably be able to connect to it over SSH. The user is ubuntu because it's an Ubuntu distribution, and our machine alias is aws-dev.
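Roughly, the hosts-file alias and the first connection look like this; the IP address is just a placeholder for your Lightsail instance's public IP:

```bash
# Edit the hosts file and add an alias for the Lightsail instance's public IP
sudo nano /etc/hosts
#   ...add a line like:
#   203.0.113.10   aws-dev

# Then connect using the alias instead of the raw IP
ssh ubuntu@aws-dev
```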
Alright. And we are in. Let's see if Docker is installed. Hey, it's installed. This is a good sign. Let's now verify, if I go back to my instructions, that it is working. Let's see. Let's go back here. That's not what I wanted to do. Let's try that again. And do we get "Hello world"? Hey, we get "Hello world". Okay. Now, for those of you who are systems-admin inclined, you'll notice that I was able to run Docker, which is a privileged thing to control on a system, as the ubuntu user. The reason we need that is Visual Studio Code will be using the OpenSSH connection, which uses the ubuntu user. So, we had to give the ubuntu user permission to control Docker, and we certainly didn't want to make it possible for the root user to connect over OpenSSH.
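For reference, a minimal sketch of the kind of Lightsail launch script described earlier; the real one lives in the repository, and the exact steps here (Docker's convenience install script plus the group change just mentioned) are assumptions:

```bash
#!/usr/bin/env bash
# Sketch of a Lightsail launch script: install Docker and let the default
# ubuntu user control it. Launch scripts typically run as root, so no sudo.
set -euo pipefail

# Install Docker Engine using Docker's convenience script
curl -fsSL https://get.docker.com | sh

# Allow the ubuntu user (the one VS Code connects as over SSH) to run Docker
usermod -aG docker ubuntu
```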
So, that's just a bit of a necessary thing that we had to do. All right. I think we've gone pretty well on time. Let's see. So, we've tested that. We've configured our VM. And the last thing I should check is that we have the code for this on the virtual machine. Okay, which we did not. So, let's go and clone that down. Oops. Okay. Now, the reason why we're cloning the code for this repository, even though we're going to be working in a dev container is because one of the most challenging aspects of dev containers is when you've got your repo locally, and then you want to be able to develop on it remotely, that code has to get to the remote Docker server somehow.
Now, when Visual Studio Code builds your dev container, it's not doing anything magic. It's basically looking for the Dockerfile and, from scratch, it's going to go through all of these steps. Now, because we can see here that it's copying the code that's inside the source directory, it is going to do that initially locally, but once you're in the cloud, once you're running your container remotely, well, it doesn't have access to your local files anymore. And there's no way to mount your local files onto the remote host; that's only something you can do if you're using Docker locally. This is probably the most challenging aspect of this, so if you've got any questions, send me an email at ryan.blunden@doppler, and I can point you in the right direction.
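The repository's actual Dockerfile isn't reproduced here, but a minimal Node.js Dockerfile along the lines described (copying the source directory in and installing dependencies) might look something like this sketch; the base image, paths and entrypoint are all assumptions:

```dockerfile
# Sketch only: base image, paths and entrypoint are assumptions, not the
# repository's actual Dockerfile.
FROM node:14-slim

WORKDIR /usr/src/app

# Install production dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --production

# Copy the application source
COPY src ./src

EXPOSE 8080
CMD ["node", "src/server.js"]
```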
Okay. So, the reason why I've done this is a couple of things. One is that I can authenticate this machine with GitHub, or my code host, so I can go through all of my Git workflows here. And what we're also going to do is mount this Mandalorian Gifs node path into the container, because that's so much easier than using something like a volume, where it's difficult to then get those code changes in. It's already a Git repository, so Visual Studio Code is going to be able to pick up those changes. It's just a much, much easier way to go. I'm going to send you some links afterwards so you can delve into the different ways of doing this, but to get started with remote development, which is definitely in the advanced part of things, [inaudible 00:23:18] in the VM and then mount that code from the VM into the dev container.
All right. There's a lot to remember, so hopefully this to-do list concept is working for you. So, now that we've got our VM in the cloud, we need to configure Docker so it is able to control that remote host, because that's what Visual Studio Code is going to do for us when it creates the dev container. All right. So, remote SSH, we've already tested that works. Now, let's create the Docker context. Because I have my aws-dev alias, that's what I can use here; if you're using an IP address, then you would put your IP address in here. So, I was going to copy this code and run it, but I can just as easily run it in the shell, so let me start doing that.
Okay. So, what this is going to do is create a new context, which is a new configuration for which Docker host the CLI is pointing at, and here we are saying that we're going to connect to it via SSH. Now, I will say that sometimes there can be performance challenges using OpenSSH, and chances are, if your DevSecOps or platform team is setting this up for you, they may set it up using TLS instead, so you're communicating directly with the Docker daemon. But for our purposes, and this is also great for security, this is how we're going to go. Now, in terms of being able to inspect the different contexts that we have, we can see that we have aws-dev, and then the default is from Docker Desktop. And we can tell that's the one I'm using because it has the little asterisk next to it.
In terms of Docker configuration, you can either do it this way or you can use the docker.host setting in VS Code. Now, this is where things get a little bit tricky too. The reason why you might want to use docker.host in VS Code instead of creating a Docker context is that if you do it this way, Visual Studio Code will respect the setting you put in here, but it's specific to Visual Studio Code only. So, for instance, you could use dev containers remotely in here, but when you run Docker containers in your local shell, that can still point to your local Docker instance. Personally, I try to stay away from Visual Studio Code settings that aren't for the workspace as much as possible, and it's really not difficult to set the Docker context. You can just say docker context use aws-dev, and now we're using the aws-dev context.
Then if you wanted to switch back to use your local Docker instance, then you just do that. And the great thing is that this setting is consistent between Visual Studio Code and your system as well. So, I think it makes much more sense just to use it directly here.
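If you did go the VS Code-only route, the docker.host setting mentioned above looks like this in settings.json; the address is an assumption matching the alias used in this session:

```jsonc
// VS Code settings.json: alternative to a Docker context, scoped to VS Code only
{
  "docker.host": "ssh://ubuntu@aws-dev"
}
```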
Okay. So, the next check: let's actually try running a container, triggering it locally, but we should be able to observe that it is actually running remotely on our VM. So, we'll see if this works. You'll notice now that there's a bit of a delay, and that's because it has to go through OpenSSH, and we can see, yeah, it seems to be working. Actually, I just want to do this again just to prove this is a real demo: SSH in and run docker ps, and once that is up and running, see, hey, we're on the machine and we can see that container running.
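Pulling the Docker context steps from this part of the session together, the commands look roughly like this; the nginx test image is just an example, not what was run in the video:

```bash
# Create a context that talks to the remote engine over SSH
docker context create aws-dev --docker "host=ssh://ubuntu@aws-dev"

# List contexts; the active one is marked with an asterisk
docker context ls

# Switch to the remote engine, run something, then verify on the VM itself
docker context use aws-dev
docker run --rm -d --name remote-test nginx
ssh ubuntu@aws-dev docker ps

# Switch back to the local Docker Desktop engine when needed
docker context use default
```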
Cool. All right. So, that is basically everything covered in terms of Docker configuration. As I said, for the remote SSH part and configuring that authentication between your machine and the VM, on Windows this is a lot trickier than it is on Linux or Mac, depending upon whether you're using WSL, depending upon the [inaudible 00:28:05] SSH client, whether you're using a terminal such as Git Bash or the integrated terminal in Visual Studio Code. There are a lot of different options, so be prepared to go on a bit of a wild ride to get that set up, but once it's set up, it is super easy. Okay. So, now we're kind of getting to the good stuff, the dev container related stuff. Let's go through and show you all of the extensions that you need to install. To save you having to manually search the extension store, I can just share this with you. This is what I want to do. That's not it, that's the Q&A messages. If I wanted to send a message, where would I be?
[inaudible 00:29:01] There we go. All right. [inaudible 00:29:10] Okay. All right. So, I'm not going to install a whole bunch of extensions; it's really just these three. And let me close that. The extensions that you absolutely have to have, and this is going to be done for you, so you don't really need to worry about it, are the Docker extension and the Remote Containers extension, obviously, so we can have remote containers. Now, the Docker extension is really cool because if you want to do things like clean up the dev containers or clean up any images that are created, you get this little view here and you can do things like remove an image or remove a container. We're not really going to be using that too much. So, we've done that. We did the dev container locally before, so we don't have to worry about that.
Let's see here. Let's now take a look at the Remote Explorer UI as well. This is a relatively new feature and it is also brilliant. Now, it's different because the Docker UI is really just showing us things about Docker, surprise, surprise. What's really cool about the Remote Explorer is that it understands dev containers and SSH targets and VMs as well, things like WSL instances. So, this is your sort of map for anything that you might be doing in Visual Studio Code that's happening remotely. Okay. Let's do that. Now, let's quickly talk about the dev container workflow. What I found really challenging when I was learning about dev containers is that there is so much documentation and it's really overwhelming. But the thing is, there are a lot of things that Microsoft is doing to make this as easy as possible for you.
So, while it is a bit of a steep learning curve, it's not too bad once you get your head around the overall concepts, which is really the point of this webinar. So, essentially, what you will do, and I'd recommend doing it with Docker locally, is modify your dev container file. And just to quickly show you what that looks like: you'll be doing things like modifying your Dockerfile, creating a command that runs when the container first starts, like installing dev packages, configuring ports, all that sort of stuff.
So, there is a lot of tweak the dev container, rebuild it, it didn't work, go back to the code, tweak it, rebuild. And essentially that's what is going on in this workflow here. So, you open it: did everything work, or did you get an error? Okay, fix it, get some success, then reopen it again. That's why doing it locally gives you a much faster feedback loop, and be prepared for a lot of these cycles when you first get going.
Okay. Dev container syntax. Let's cover that. So, the spec for dev containers is huge, and that's because there are so many different things that you can do. You've got build args and build targets, a lot of the same things that you've got when you actually build your container. You've got the container env, you've got remote env, and you've got things like: what user should the VS Code server run as, and should that be different from the user the container actually runs as? One thing that they advocate for, which is good, is running as a non-root user, the reason being that we are mapping the code from the host into our container. We'll just go in here, and you can see that all of this code is owned by the ubuntu user in the ubuntu group.
If we run our container as root, any files that we create in the container are going to be owned by root here as well. Now, in the resources I'm going to email you, there is an entire section on container user scenarios, such as making the user IDs match. Yeah, it's a whole thing, but I've got a link in there to help you out. All right. So, getting back to the dev container syntax. It's reasonably simple to get going once you understand how everything works. So, let me just go through the most essential things for you to understand when it comes to the syntax here. First of all, you need to choose whether you're going to be building your image from scratch each time, or whether you just want to use an off-the-shelf image. Most of the time you'll want to build it.
This might be a good option if you just want to quickly review a pull request and you don't necessarily need to rebuild the image to do that. You can change the container user to be root, but I would stay away from that until you know you need it. The reason you might need it: in my example, all my command is doing is installing dev packages such as ESLint and Prettier, so I don't need root access, but if you are installing something like a compiler such as GCC, then you'll need root access to install those system-level packages. This is where we specify the command that we are going to run. Forward ports, pretty straightforward, no pun intended: these are the ports that your application is going to be listening on.
Now, you can define the workspace folder. By default, I believe this is just a workspace folder at the root, which is this. Now, this is fine; don't feel like you need to change it. The reason why I'm doing this is because I set up a path that is looking for the nodemon binary in node_modules, and that's why I need these to be the same. That's just me, you don't have to do this. Then, this is probably the most important part. I touched on this really briefly earlier: you have essentially three ways in which you can mount code in here. One is to use a volume. This is great to get started. It's also awesome because you don't even need access to the file system here, because Docker is going to be creating its own volume.
And then what happens is Visual Studio Code takes that volume and then mounts it inside the container. Now, this is cool, but it's quite advanced because then you need Git credentials inside the container in order to be able to create [inaudible 00:35:44] Git operations as well. So, that's definitely on the more advanced side of things. This is the version that we're going to be using because it's going to be mounting the code that's in here into this here. And if you've ever used a volume mount into a running container for like local development purposes, this is exactly how you would do that. If you're doing things locally, then you just want this because you just say Visual Studio Code, wherever my workspace is on my machine, then just map that into here. All right.
So, yeah, as I said, because we're going to be running this remotely, we'll go with option number two. Now, you can also set environment variables for your container. In this instance I've got the Doppler CLI installed locally, and the reason why I use Doppler, and I'm not just saying this as the developer advocate, is that it makes it so much easier to manage your secrets. If I can just give you a really brief overview of what I think are the killer features for development: A, you get a really nice UI where you can go and change things, and you can do things like add notes to secrets. And what's great is that because Doppler is a central source of truth for secrets, this is what your teammates are also going to be pulling from as well.
So, for instance, say they have just merged code and their code relies on a new config value called new secret. Well, if you were using dotenv files, you have to have a way of getting this new secret variable out to everyone on your team. This might be over Slack; you might even be uploading your whole env file for everyone to look at. That's not great for security, and it's just a manual process: everyone has to then go and grab that and put it in their .env file. If you use Doppler and you make this change and save it, then the next time you use Doppler to run your application, it's always going to fetch the latest version of your secrets. So, that just makes it so much nicer. The other thing that you can do as well, which is particularly handy in development: what happens is I'm working on one part of the project and a teammate, say Brian, is working on a different feature, and they might need to override a particular setting.
Now, if we go in here, yes, they can change it here, but this is going to change it for all of the developers. We have a concept of branch configs, and what that enables me to do is come into this branch config and, let's say, change this value and hit save. And then if I'm pulling my secrets from my particular config, it means that I will see this override, but my teammates will still see what is in the dev config. And you can see this little icon here showing that I've overridden that value. So, that's just a couple of reasons why you might want to consider using a secrets manager instead of dotenv files. And we're going to be talking about dotenv files in future articles on the Doppler site, so definitely go and check those out.
If things start to get weird and step debugging just doesn't work, this is something you can try. As I said, this really is bleeding-edge technology, so sometimes things don't always go as planned. And then here's just an example of an extension that's actually going to be run in the remote container. So, I'm going to be able to use Prettier to look at my Prettier [inaudible 00:39:05] config and reformat my code, but this is all happening remotely inside the VS Code server. Okay. So, that is the basic syntax. And look at that, we're almost out of time, but that's okay, I've only got a few more things I really want to talk about. So, we've talked about the container syntax and I've given you a crash course on the code mounting, but now let's actually do something cool and look at step debugging.
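Before the debugging demo, here's a rough sketch that pulls the syntax pieces above together into the shape of a devcontainer.json. The specific paths, port, user, command and extension ID are illustrative assumptions, not a copy of the repository's actual file:

```jsonc
// Sketch of a devcontainer.json along the lines described above.
{
  "name": "mandalorian-gifs-node",

  // Build from the project's Dockerfile (or swap for "image": "..." to use
  // an off-the-shelf image instead of rebuilding each time)
  "build": {
    "dockerfile": "../Dockerfile"
  },

  // Run as a non-root user so files created in the container stay owned by
  // the same non-root user that owns the code on the VM
  "remoteUser": "node",

  // Dev-only tooling (ESLint, Prettier, etc.) is installed after creation
  "postCreateCommand": "npm install --only=dev",

  // The port the app listens on, forwarded back to the local machine
  "forwardPorts": [8080],

  // Option two from the video: bind-mount the clone that lives on the VM
  "workspaceMount": "source=/home/ubuntu/mandalorian-gifs-node,target=/usr/src/app,type=bind",
  "workspaceFolder": "/usr/src/app",

  // Extensions that run inside the container's VS Code server
  "extensions": ["esbenp.prettier-vscode"]
}
```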
So, to do step debugging, let's actually rebuild and reload this container, this time in the remote environment. What it's going to do is show you all of the logs. For the most part, this is not going to be very interesting. Here we can see that it is rebuilding the container, and it does this from scratch in order to make sure that if you've changed anything in the Dockerfile, that's going to be reflected in the container that it ends up running for you. While that is doing that, I want to talk about a couple of other things as well. So, Visual Studio Code is cool in that it... Let me move this. It's cool because it sets up the local tunneling. I just lost my train of thought. It sets up the local tunneling, which is cool, but that can be a little bit slower because it's going from your machine, tunneling over OpenSSH to the remote host and then tunneling into the container.
So, there are a lot of jumps that happen in there. If you haven't used it before, there's a great little product called ngrok. It's free to use initially, but they do have paid plans as well. And what it does is tunneling as a service. So, what's particularly cool is if you wanted to do something like test out Doppler webhooks for auto-reloading when secrets change, you'll need an HTTPS address for that. And so, you can run your app on port 8080 just over HTTP, and ngrok will take care of creating a tunnel directly into that. And because it's a tunnel from ngrok's servers directly into the container, it's much faster to preview your app, and I'll be able to give you a demo of how that works.
Okay. So, this is definitely going to be a bit slower the first time you start out, because there's a lot of machinery that Visual Studio Code needs to install for us: not just the dev container, but all of the VS Code server infrastructure as well. Okay. So far so good, now it's running our dev container setup command and, hey, now we can start to see our code in Visual Studio Code. This is, I think, the holy grail that developers have been looking for. We have a very consistent editor experience, including being able to have our launch configurations, but this is all running on the server.
Now, when I'm doing training on these sorts of things, I always like to prove that there's no magic. So, if we head back over here and we go into the Mandalorian Gifs code, if I do that, and then if I touch a new file and we head back in here, what we should see is the new file. And if I delete that, we should see it deleted from here as well. It's super cool, right? Almost like magic, but because you guys know what's going on, it's not magic. Okay. So, let's go back. What was my next step? Okay. Now... Well, wait. So, the way to think about this is that it doesn't necessarily look at your Dockerfile and its goal is not to automatically start your application. What it's done is essentially create a development environment for you. And so, now we create a terminal, and this terminal is also in the context of the container.
And so, now what we want to do is run our application. So, let me just see if... Oh yeah. So, there are weird things like this that happen, bleeding-edge technology. Let's see. Okay, great. So, Doppler is working. I'm going to run this application using Doppler because it takes care of an issue with dotenv files related to the HOSTNAME environment variable, and I'll show you that problem in a second. Okay. So, we are launching our application and, fantastic, everything is working brilliantly. Now, we can open this in the browser, and this is tunneling from our local machine through the OpenSSH connection, then through to the container on the host. So, the performance isn't terrible, but it's not the best, and this is why using something like ngrok can speed things up immensely.
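As a quick aside, the "run it through Doppler" step just mentioned boils down to roughly the following; the npm script name is an assumption:

```bash
# Inject secrets from Doppler into the process environment at runtime,
# instead of reading them from a .env file
doppler run -- npm run dev
```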
So, let's go and set that up. Also, one thing I should say too is... Yeah, we're definitely over time. I'm going to keep going for five minutes; if you need to drop off, no worries. This is all going to be covered in the recording, and we'll send you a link to that. So, one thing that I'm a big fan of is making sure that everything is self-contained in your package of [inaudible 00:44:38], and so here we've got all of the instructions for setting up ngrok. And the way I've built this repository, obviously this isn't a real app that we're going to deploy and create a business around, but all of the code that you'll need to learn from is here, including things like how to set up your Lightsail instance with Ubuntu. So, if you are someone who just wants to go and check out the code and learn that way, go for it.
So, this is working. This is all well and good. Now, I want to show you how do you use ngrok in order to make this even better. So, I'm going to say npm run ngrok setup. That's going to install everything. It's going to ask for my authtoken. And the authtoken is how I get features such as a stable remote URL, so I don't have to remember the random one that ngrok is going to create for me. Obviously, I'm using a password manager. Please use a password manager. Please don't ever use just normal passwords.
There's just too many breaches and things like that. You need a way to create super robust passwords that you'd never be able to remember. Okay. So, ngrok has created a forwarding interface from this to localhost in the container. So, if I click this, we'll see it's much faster. This is really the speed and workflow that you would expect. The SSH tunnel isn't bad, and you don't necessarily need ngrok, but for me, that performance isn't ideal, and it's simply a limitation of using remote SSH and all of that tunneling. So, this is for sure what I would go with.
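Under the hood, an ngrok setup like the one wrapped by the repo's npm script comes down to something like this; the exact commands can vary by ngrok version, so treat it as a sketch:

```bash
# Register the authtoken once (this is what gives you things like a stable URL)
ngrok authtoken <your-ngrok-authtoken>

# Tunnel straight to the app listening on port 8080 inside the container
ngrok http 8080
```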
Okay. If things get weird, there's a couple of things you can do. You can tell Visual Studio Code that you simply want to rebuild the container. You can also ask Visual Studio Code to reload the window. And then if you get into some sort of like weird state where Visual Studio Code can't connect to like the container, even though you can verify the container is running, you can just disconnect, go back to your local view, and then you can go into using... Where's it going? Oh, once you reload Visual Studio Code locally, and you get access to Docker, then you can just manually delete the container that Visual Studio Code has created for you. You can delete the image that it's created for you. So, you can just get back to a clean slate. So, there's a few different things that can go wrong, but, hopefully after this guide it'll be reasonably smooth sailing for you.
Okay. So, that's everything that I really wanted to cover in this webinar. We went a little bit over time, but not too bad. So, hopefully by the end of this, and obviously there's a lot to get your head around and a lot to set up, I've demystified the magic of dev containers. There's a lot of technology that Visual Studio Code is taking care of for you, but essentially VS Code is a client, and the Docker CLI is a local client as well. We're using OpenSSH to communicate with a remote host, in this case an Ubuntu VM on AWS, and that is what is allowing Visual Studio Code to control containers on that remote host. We've gone through some of the challenges and benefits of remote development in containers.
The benefits are huge, but they're not without a bit of a learning curve and some configuration issues. So, whether this is the right thing for your team is up for debate, but I think it's worth experimenting with, for sure. We took you through how to set up a VM in AWS, and I'd recommend Lightsail just because it's built for this kind of thing: you just want a simple way to create a VM. We've learnt how to configure Docker locally using the OpenSSH configuration, and we've learnt how, with Docker contexts, we can switch between the AWS version and our local Docker version. You've got a basic understanding now of the dev container syntax, at least in terms of the most important things that you're definitely going to need to get things working. And we've also covered the source code mounting options.
Now, as a quick side note, I just wanted to touch on why using dotenv files can be really problematic in this sort of scenario. So, I'm going to stop our debug server here. I've got a sample dotenv file, and I'm simply going to copy it and paste it. Okay. So, we've got a dotenv file, and now when I run our application, instead of using the Doppler CLI, I'm just going to run it as normal. So, we have all of our app configuration in the dotenv file. Things seem pretty good, pretty straightforward. Let's bring up the debug console. Okay. Now, for those of you who have used Docker and have had trouble reaching the application inside Docker from the outside world, you'll know exactly what the problem is.
If I now go back to here, it's not working. But we can see that the app is up, so what's going on? Well, it turns out that the container itself has a HOSTNAME environment variable. And so, if we're using the dotenv [inaudible 00:50:28], which we are, let's see if there's a way that we can... All right, environment variables, so it makes the dotenv file the source of truth. Okay. So, it seems like this is something that's already been asked.
Okay. So, essentially this is not something they're planning on fixing at all. So, that's really challenging. I think on the homepage they actually do have a section on how to override it. There we go. This is really odd, and we don't want to have to go through all of this. So, you can work around it: you can go into your launch configuration and specify it there. We want to find env, add HOSTNAME, and set it to 0.0.0.0, and now if we run it with node... Oops, what I probably wanted to do is just reload that. Cool. Let's run that again.
And, hopefully, we should see... Okay, now we're seeing the HOSTNAME override. And if we head back here, hey, now I've got things working. Now, you might think, "Well, that's not that big a deal," but what we and our customers have noticed is that it's little paper cuts like this that add up to a lot of pain over time. You really shouldn't have to hack your launch configuration just to work around an issue with dotenv. So, that's why using a secrets manager that keeps the source of truth for your secrets remotely is just going to be the way to go. Regardless of whether it's Doppler or a different secrets manager, you really should look at moving away from dotenv files, not just for the productivity benefits, but for the security benefits as well.
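For reference, the launch-configuration workaround just described boils down to something like this sketch; the configuration name and program path are assumptions rather than the repository's actual launch.json:

```jsonc
// .vscode/launch.json: sketch of the HOSTNAME override described above
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch app",
      "program": "${workspaceFolder}/src/server.js",
      "env": {
        // Override what dotenv would otherwise force, so the app binds to
        // all interfaces and is reachable from outside the container
        "HOSTNAME": "0.0.0.0"
      }
    }
  ]
}
```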
All right. Well, sorry, I did go 10 minutes over time. I am going to send out a link after this session with some recommended reading, some troubleshooting tips, and things to consider, such as Git credentials, that we didn't have time to go into today. And you should now have all of the knowledge to at least confidently start bringing dev containers to your own remote environments and your applications. So, I hope you enjoyed that and that it all made sense. It is a lot to get your head around, but I think remote dev containers are the future; that's what we've always wanted. And once GitHub Codespaces comes out of beta, this will be exciting, because then everyone, to a certain extent, will get the benefits of remote dev containers for things such as easily previewing a pull request in a container, in a live environment.