Windows Server containers. Packaging an application in a Docker container. Stopping running containers

Studying container technology
Windows Server 2016

One of the notable new features introduced in Windows Server 2016 is support for containers. Let's get to know it better.

Modern systems have long moved away from the principle of one OS per server. Virtualization technologies make it possible to use server resources more efficiently by running several operating systems on one machine, keeping them separated from each other and simplifying administration. Then came microservices, which allow isolated applications to be deployed as separate, easily managed and scalable components. Docker changed everything: the process of delivering an application together with its environment became so simple that it could not fail to interest end users. An application inside a container works as if it had a full-fledged OS to itself. But unlike virtual machines, containers do not load their own copies of the OS, libraries, system files, and so on. Each container receives an isolated namespace in which the application has access to all the resources it needs but cannot go beyond them. If settings need to be changed, only the differences from the main OS are saved. As a result, a container, unlike a virtual machine, starts very quickly and puts less load on the system, using server resources more efficiently.

Containers on Windows

In Windows Server 2016, in addition to the existing virtualization technologies (Hyper-V and Server App-V virtual applications), support for Windows Server Containers has been added, implemented through the Container Management stack abstraction layer, which provides all the necessary functions. The technology was announced back in Technical Preview 4, but much has changed since then in the direction of simplification, and instructions written earlier are no longer worth following. Two types of containers were proposed: Windows containers and Hyper-V containers. And perhaps the other key capability is that, in addition to PowerShell cmdlets, Docker tools can be used to manage containers.

Windows containers are similar in principle to FreeBSD Jail or Linux OpenVZ: they share one OS kernel, which, along with other resources (RAM, network), is divided among them. OS and service files are projected into the namespace of each container. This type of container uses resources efficiently, reducing overhead, and therefore allows applications to be packed more densely. Since base container images use the same kernel as the host, their versions must match, otherwise operation is not guaranteed.

Hyper-V containers add an extra level of isolation: each container is allocated its own kernel and memory. Isolation, unlike the previous type, is provided not by the OS kernel but by the Hyper-V hypervisor (the Hyper-V role is required). The result is lower overhead than virtual machines but greater isolation than Windows containers. In this case, the container does not need to have the same OS kernel as the host. These containers can also be deployed on Windows 10 Pro/Enterprise. It is especially worth noting that the container type is chosen not at creation time but at deployment time: any container can be launched either as a Windows container or as a Hyper-V container.

The container uses a trimmed-down Server Core or Nano Server as its OS. The first appeared in Windows Server 2008 and provides greater compatibility with existing applications. The second is even more stripped down than Server Core and is designed to run without a monitor, allowing the server to run in the minimum possible configuration for use with Hyper-V, a file server (SOFS), and cloud services, requiring 93% less space. It contains only the most necessary components (.NET with CoreCLR, Hyper-V, Clustering, and so on).

The VHDX hard disk image format is used for storage. Containers, as in the case of Docker, are saved into images in the repository. Moreover, each one does not save the complete set of data, but only the differences between the created image and the base one. And at the moment of startup, all the necessary data is projected into memory. Virtual Switch is used to manage network traffic between the container and the physical network.

Containers in Microsoft Windows Server 2016 extend the technology's capabilities to customers, who can now use containers for developing, deploying, and hosting applications as part of their development processes.

As the pace of application deployment continues to accelerate, and customers roll out new application versions daily or even hourly, the ability to quickly move validated applications from the developer's keyboard into production is critical to business success. Containers accelerate this process.

While virtual machines make it possible to migrate applications within data centers and to the cloud, containers unlock virtualization resources further through OS-level virtualization. Thanks to this, the solution enables fast delivery of applications.

Windows Container technology includes two different types of containers, Windows Server Container and Hyper-V Containers. Both types of containers are created, managed, and function identically. They even produce and consume the same container image. They differ from each other in the level of isolation created between the container, the host operating system and all other containers running on the host.

Windows Server Containers: Multiple container instances can run simultaneously on a host, with isolation provided through namespace, resource management, and process isolation technologies. Windows Server Containers share the same kernel with the host.

Hyper-V Containers: Multiple container instances can run simultaneously on a host. However, each container is implemented inside a dedicated virtual machine. This provides kernel-level isolation between each Hyper-V container and the host.

Microsoft has included in the container feature a set of Docker tools for managing not only Linux containers but also Windows Server and Hyper-V containers. As part of its collaboration with the Linux and Windows communities, Microsoft has extended the Docker experience by creating a PowerShell module for Docker, which is now open source. The PowerShell module can manage Linux and Windows Server containers locally or remotely via the Docker REST API. Microsoft plans to continue developing the platform in the open and to bring these technologies to customers along with innovations such as Hyper-V containers.


If you follow modern trends in the IT world, you have probably heard about Docker. In short, this technology lets you run containers with installed applications in their own sandbox (no, this is not virtualization). You can read more details, for example, on Habr. This means we can quickly assemble and launch a container with the required version of the 1C server. Docker is widely used on Linux, and you can even find ready-made containers on Docker Hub, but 1C mostly lives on Windows.

What is it for?

Quick and easy deployment: we can prepare a working environment with two commands. Our prepared environment is always in the expected state, with no fiddling around during installation.

Installing several versions of 1C server and launching the desired one.

No extra junk gets installed on the server.

In this article I will show you how to assemble a container with a 1C server yourself.

OS requirements:

The Windows Container feature is only available on Windows Server build 1709, Windows Server 2016, Windows 10 Professional, and Windows 10 Enterprise (Anniversary Edition)

Hardware requirements:

The processor must support virtualization

Installing Docker

Windows Server 2016

Open PowerShell as administrator and run the following commands:

Install-Module DockerMsftProvider -Force
Install-Package Docker -ProviderName DockerMsftProvider -Force
(Install-WindowsFeature Containers).RestartNeeded

If "Yes" appears on the screen after the last command, you need to restart the computer.

Windows 10

It's a little easier here. Download the installer from the official site, download.docker.com, and launch it. During installation, check the box next to Windows containers.

Launch

To launch our environment, we need to launch 2 containers: a database and a 1C server. Of course, you can use your existing server.

Database

We will run it on MSSQL. Microsoft has already prepared the necessary container with detailed description. Link to docker.hub

We install it with a command in PowerShell run as administrator, substituting our own password:

docker run -d -p 1433:1433 -e sa_password= -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

Let's look at this command:

docker run - runs a container from an image in local storage. If the image is not there, it is downloaded from the repository.

-d - the container runs in the background. Otherwise you will be dropped into the container's PowerShell console.

-p - forwards a port from the container to the local machine.

-e - variables that are passed to the container.

In the -e sa_password= variable you need to set your SA user password.

To connect existing databases, we extend our command.

We need to forward the folder with our databases to the container

-v DirectoryOnHost:DirectoryInContainer

Databases are connected via the attach_dbs variable

-e attach_dbs="[{'dbName':'Test','dbFiles':['C:\\db\\test.mdf','C:\\db\\test_log.ldf']},{'dbName':'HomeBuh','dbFiles':['C:\\db\\HomeBuh.mdf','C:\\db\\HomeBuh_log.ldf']}]"

docker run -d -p 1433:1433 -e sa_password= -e ACCEPT_EULA=Y -v C:/temp/:C:/temp/ -e attach_dbs="[{'dbName':'SampleDb','dbFiles':['C:\\temp\\sampledb.mdf','C:\\temp\\sampledb_log.ldf']}]" microsoft/mssql-server-windows-developer

1C Server

Attention! This image is for testing purposes only.

To ensure that information about our clusters is saved on the local computer and can be connected to another container, let’s create a folder c:\srvinfo

Let's run the powershell command

docker run -d -p 1541:1541 -p 1540:1540 -p 1560-1591:1560-1591 -v C:/srvinfo:C:/srvinfo lishniy/1c-windows

All is ready. This is where a surprise awaited me. I have been using MSSQL in a container on a test machine for a long time and always accessed it via localhost. Now, whether something broke or the stars aligned, that stopped working. So until this is fixed, we either attach the container to our network (when starting the container, specify --network host in place of the bunch of ports), or we determine the IP addresses issued within the network and connect to those. To do the latter, you need to run two simple commands. In the example I show them together with their output.

PS C:\WINDOWS\system32> docker container ls
CONTAINER ID  IMAGE                                     COMMAND                   CREATED         STATUS                   PORTS                                                               NAMES
7bd5d26e9297  lishniy/1c-windows                        "powershell -Command..."  12 minutes ago  Up 10 minutes            0.0.0.0:1540-1541->1540-1541/tcp, 0.0.0.0:1560-1591->1560-1591/tcp  gallant_perlman
696eb9b29a02  microsoft/mssql-server-windows-developer  "powershell -Command..."  38 minutes ago  Up 37 minutes (healthy)  0.0.0.0:1433->1433/tcp                                              youthful_wing

PS C:\WINDOWS\system32> docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" 696eb9b29a02
172.17.84.179
PS C:\WINDOWS\system32> docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" 7bd5d26e9297
172.17.92.255

The first command displays a list of containers, the second gets the IP address of the container by its id.

So we have the addresses. Now open the administration console and add our database as usual.

Stop running containers

When executing the command

docker run ...

We always create a new, clean container without data. In order to access the list of already created containers, just run the command

docker container ls -a
CONTAINER ID  IMAGE                                     COMMAND                   CREATED     STATUS                            PORTS  NAMES
7bd5d26e9297  lishniy/1c-windows                        "powershell -Command..."  2 days ago  Exited (1073807364) 43 hours ago         gallant_perlman
696eb9b29a02  microsoft/mssql-server-windows-developer  "powershell -Command..."  2 days ago  Exited (1073807364) 4 minutes ago        youthful_wing

In the future, you can start/stop ready-made containers

docker container start Container_ID
docker container stop Container_ID
docker container restart Container_ID

There are also GUI applications for management, for example Kitematic.

Building a Docker container

Using ready-made containers is simple and convenient; in the case of the database, we can go to GitHub and see how it was built. However, for containers without a dockerfile in the description, we cannot know for sure what is inside.

So, here is the minimum we need:

  1. The 1C installer.
  2. A dockerfile.
  3. A PowerShell script to start the 1C service. I used the one from the Microsoft repository.
  4. A PowerShell script for installation and configuration. I called it prepare.ps1.

Everything is clear with the first two. Let's move on to building the dockerfile.

dockerfile

This file describes the steps for building our container.

First, let's just try to build and run our container. To do this, we collect all our files in one directory and create a dockerfile there with the following contents:

FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
WORKDIR /
COPY prepare.ps1 Wait-Service.ps1 1cEnt.zip sqlncli.msi ./
RUN .\prepare.ps1; powershell.exe -Command Remove-Item prepare.ps1 -Force
CMD .\Wait-Service.ps1 -ServiceName "1C:Enterprise 8.3 Server Agent" -AllowServiceRestart

Let's analyze it in detail

FROM microsoft/windowsservercore

We indicate the image that we take as a basis: Windows Server Core. By default, the image with the latest tag is used. You can also try the Nano Server image, which takes up much less space. I used this one, since the mssql container is built on it, and in that case this layer did not need to be downloaded again.

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

Specifies PowerShell as the shell instead of cmd.

WORKDIR / - specifies the working directory
COPY - copies files for installation
RUN - runs the installation script
CMD - the command that will be launched after the container starts

Create a file prepare.ps1. We install 1C in it and configure the service.

msiexec /i "1CEnterprise 8.2.msi" /qr TRANSFORMS=adminstallrelogon.mst;1049.mst DESIGNERALLCLIENTS=0 THICKCLIENT=0 THINCLIENTFILE=0 THINCLIENT=1 WEBSERVEREXT=0 SERVER=1 CONFREPOSSERVER=0 CONVERTER77=0 SERVERCLIENT=0 S=RU
Remove-Item c:\sqlncli.msi -Force
sc.exe config "1C:Enterprise 8.3 Server Agent" depend= "/"

Pay attention to the last line. The 1C Server Agent service's dependencies include the Windows Server (LanmanServer) service, which does not run in containers. I don't know why it was added, but the 1C server works fine without it, so we simply remove it from the dependencies so that our service starts correctly.

Now, in a PowerShell window, go to the folder with the files and enter:

docker build .

After the build is complete, run it (in your case, the first two columns will show &lt;none&gt;).

docker images
REPOSITORY           TAG     IMAGE ID      CREATED     SIZE
lishniy/1c-windows   latest  dab800c94b09  3 days ago  11.6GB

docker run -d -p 1541:1541 -p 1540:1540 -p 1560-1591:1560-1591 dab800c94b09

After these operations, our container will work. But there are small nuances: we can neither enable logging, nor use debugging on the server, nor change ports. So let's slightly modify our dockerfile.

FROM microsoft/windowsservercore
ENV regport=1541 \
    port=1540 \
    range="1560:1591" \
    debug="N" \
    log="N"
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
WORKDIR /
COPY logcfg.xml start.ps1 prepare.ps1 Wait-Service.ps1 1cEnt.exe sqlncli.msi ./
RUN .\prepare.ps1; powershell.exe -Command Remove-Item prepare.ps1 -Force
CMD .\start.ps1 -regport $env:regport -port $env:port -range $env:range -debug $env:debug -servpath "C:\srvinfo" -log $env:log -Verbose

ENV regport=1541 \
    port=1540 \
    range="1560:1591" \
    debug="N" \
    log="N"

Now a script is used as the entry point; in it we can set ports, enable debugging and logging, and specify the path for storing information about clusters.

You can write your own script, or use a ready-made one in the application.
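If you do write your own, it might look roughly like the sketch below. This is only an illustration of the idea, not the actual script from the image: the ragent.exe path, version number, and logcfg.xml location are assumptions and must be adjusted to the installed 1C version.

```powershell
param(
    [string]$regport  = "1541",
    [string]$port     = "1540",
    [string]$range    = "1560:1591",
    [string]$debug    = "N",
    [string]$servpath = "C:\srvinfo",
    [string]$log      = "N"
)

$agent = "1C:Enterprise 8.3 Server Agent"

# Rebuild the agent's command line with the requested ports and cluster directory.
# The path and version below are hypothetical - substitute your own.
$ragent = "C:\Program Files\1cv8\8.3.12.1469\bin\ragent.exe"
$agentArgs = "-srvc -agent -regport $regport -port $port -range $range -d `"$servpath`""
if ($debug -eq "Y") { $agentArgs += " -debug" }
sc.exe config $agent binPath= "`"$ragent`" $agentArgs"

# Technological logging is enabled by placing logcfg.xml next to the server's conf files
if ($log -eq "Y") {
    Copy-Item C:\logcfg.xml "C:\Program Files\1cv8\8.3.12.1469\bin\conf\logcfg.xml"
}

Start-Service $agent
# Block so the container stays up while the service runs
.\Wait-Service.ps1 -ServiceName $agent -AllowServiceRestart
```

The last line reuses the same Wait-Service.ps1 from the Microsoft repository that the first version of the dockerfile called directly.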

How to package an application in a Docker container?

I have an application written in NodeJS. How can I package it into a Docker image to run as a container?

Docker is a container management system for POSIX-compliant operating systems (currently Linux is supported). A special feature of Docker is the ability to package an application with all the necessary environment in such a way that it can be run on another system without long and complex procedures for installing dependencies or building from source. A packaged application ready for deployment is called an "image". Docker images are based on "templates", pre-configured working environments. You can think of these as operating system distributions, although this is not entirely accurate. You can also create your own template by reviewing the Docker documentation. The advantage of this approach is that the image of your application will contain only the application itself, while the environment it requires will be downloaded automatically from the template repository. Docker is slightly reminiscent of chroot or BSD jail, but works differently.

It is important to distinguish between the concepts of "container" and "image". A container is a running instance of your application, while an image is the file in which the application is stored and from which containers are created.

Let's say you have a NodeJS application that you want to containerize. Let's assume that the file that runs your application is called server.js and the application listens on port 8000. We will use "node:carbon" as the template. To containerize your application, you need to create a file named "Dockerfile" in the directory where your application files are located, which will describe the image preparation parameters:

$ touch Dockerfile

The contents of the file might be something like this:

# Specify the template to use
FROM node:carbon

# Create the application's working directory inside the container
WORKDIR /usr/src/app

# Install application dependencies using npm
# Both package.json and package-lock.json are copied, if present
COPY package*.json ./
RUN npm install

# Copy your application files to the image
COPY . .

# Open port 8000 so that it is accessible from outside the container
EXPOSE 8000

# Execute the command to run the application inside the container
CMD [ "npm", "start" ]
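Note that CMD [ "npm", "start" ] assumes your package.json defines a start script. A minimal hypothetical package.json for this layout (names and version are illustrative) might be:

```json
{
  "name": "node-web-app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}
```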

To exclude unnecessary files from the image, you can list their names in a ".dockerignore" file. You can use a mask (*.log).
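For example, a typical .dockerignore for a Node project (the entries are illustrative) keeps dependencies and logs out of the image, since node_modules is rebuilt by npm install inside the container anyway:

```
node_modules
npm-debug.log
*.log
```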

The image is built with the following command:

$ docker build -t username/node-web-app .

$ docker images

# Example
REPOSITORY              TAG     ID            CREATED
node                    carbon  1934b0b038d1  5 days ago
username/node-web-app   latest  d64d3505b0d2  1 minute ago

The container is launched from the image using the following command:

$ docker run -p 49160:8000 -d username/node-web-app

This example creates a container from the image "username/node-web-app" and runs it immediately. The application's port 8000 is available only inside the container, so to make it accessible "outside" it is "forwarded" to port 49160 on the host. You can choose any free port; it is also possible to forward the application port "as is" by specifying the option "-p 8000:8000".

You can see that your container is running by entering the command:

$ docker ps

# Example
ID            IMAGE                         COMMAND    ...  PORTS
ecce33b30ebf  username/node-web-app:latest  npm start  ...  49160->8000

A container can be managed using various commands by specifying the ID of this container:

$ docker pause ecce33b30ebf - pause the container with ID ecce33b30ebf
$ docker unpause ecce33b30ebf - resume the container with ID ecce33b30ebf
$ docker stop ecce33b30ebf - stop the container with ID ecce33b30ebf
$ docker rm ecce33b30ebf - delete the container (this deletes all data created by the application inside the container)

*nix systems initially implement multitasking and offer tools that allow you to isolate and control processes. Technologies such as chroot(), which provides isolation at the file system level, FreeBSD Jail, which restricts access to kernel structures, LXC and OpenVZ, have long been known and widely used. But the impetus for the development of technology was Docker, which made it possible to conveniently distribute applications. Now the same thing has come to Windows.

Containers on Windows

Modern servers have excess capacity, and applications often use only a fraction of it. As a result, systems "stand idle" some of the time, heating the air. The solution was virtualization, which allows several operating systems to run on one server, guaranteed to be separated from each other, with each allocated the required amount of resources. But progress does not stand still. The next stage is microservices, where each part of an application is deployed separately as a self-sufficient component that can easily be scaled to the required load and updated. Isolation prevents other applications from interfering with the microservice. With the advent of the Docker project, which simplified the process of packaging and delivering applications together with their environment, microservices architecture received an additional impetus for development.

Containers are another type of virtualization that provide a separate environment for running applications, called OS Virtualization. Containers are implemented through the use of an isolated namespace, which includes all the resources necessary for operation (virtualized names), with which you can interact (files, network ports, processes, etc.) and which you cannot leave. That is, the OS shows the container only what is allocated. The application inside the container believes that it is the only one and runs in a full-fledged OS without any restrictions. If it is necessary to change an existing file or create a new one, the container receives copies from the main host OS, saving only the changed sections. Therefore, deploying multiple containers on a single host is very efficient.

The difference between containers and virtual machines is that a container does not load its own copy of the OS, libraries, system files, and so on; the operating system is, in effect, shared with the container. The only additional overhead is the resources required to run the application in the container. As a result, a container starts in a matter of seconds and loads the system less than a virtual machine does. Docker currently offers 180 thousand applications in its repository, and the format has been unified by the Open Container Initiative (OCI). But dependence on the kernel means that containers will not work on another OS: Linux containers require the Linux API, so they will not run on Windows.

Until recently, Windows developers offered two virtualization technologies: virtual machines and Server App-V virtual applications. Each has its own application niche, its pros and cons. Now the range has become wider: containers have been announced in Windows Server 2016. And although at the time of TP4 development was not yet complete, it is already quite possible to see the new technology in action and draw conclusions. It should be noted that, being in catch-up mode with ready-made technologies at hand, MS developers went a little further in some respects, making the use of containers easier and more universal. The main difference is that two types of containers are offered: Windows containers and Hyper-V containers. In TP3 only the first were available.

Windows containers share one OS kernel, which is dynamically divided among them. The distribution of resources (CPU, RAM, network) is handled by the OS. If necessary, you can limit the maximum resources allocated to a container. OS files and running services are mapped into each container's namespace. This type of container uses resources efficiently, reducing overhead, and therefore allows applications to be packed more densely. This mode is somewhat reminiscent of FreeBSD Jail or Linux OpenVZ.

Hyper-V containers provide an additional level of isolation using Hyper-V. Each container is allocated its own kernel and memory; isolation is carried out not by the OS kernel, but by the Hyper-V hypervisor. The result is the same level of isolation as virtual machines, with less overhead than VMs, but more overhead than Windows containers. To use this type of container, you need to install the Hyper-V role on the host. Windows containers are more suitable for use in a trusted environment, such as when running applications from the same organization on a server. When a server is used by multiple companies and a greater level of isolation is needed, Hyper-V containers are likely to make more sense.

An important feature of containers in Win 2016 is that the type is selected not at the time of creation, but at the time of deployment. That is, any container can be launched both as Windows and as Hyper-V.

In Win 2016, the Container Management stack abstraction layer, which implements all the necessary functions, is responsible for containers. The VHDX hard disk image format is used for storage. Containers, as in the case of Docker, are saved into images in the repository. Moreover, each does not save a complete set of data, but only the differences between the created image and the base one, and at the time of launch, all the necessary data is projected into memory. A Virtual Switch is used to manage network traffic between the container and the physical network.

Server Core or Nano Server can be used as the OS in the container. The first has been around for a long time and provides a high level of compatibility with existing applications. The second is an even more stripped-down version for working without a monitor, allowing the server to run in the minimum possible configuration for use with Hyper-V, a file server (SOFS), and cloud services. There is, of course, no graphical interface. It contains only the most necessary components (.NET with CoreCLR, Hyper-V, Clustering, and so on), and as a result takes up 93% less space and requires fewer critical fixes.

Another interesting point: to manage containers, in addition to traditional PowerShell, you can also use Docker. And to make it possible to run non-native utilities on Windows, MS has partnered with Docker to extend the Docker API and toolkit. All the work is open and available on the official GitHub of the Docker project. Docker management commands apply to all containers, both Windows and Linux, although of course a container created on Linux cannot be run on Windows (and vice versa). Currently, PowerShell is limited in functionality and only allows you to work with a local repository.

Installing Containers

Azure has the required Windows Server 2016 Core with Containers Tech Preview 4 image that you can deploy and use to explore containers. Otherwise, you need to configure everything yourself. For local installation you need Win 2016, and since Hyper-V in Win 2016 supports nested virtualization, it can be either a physical or virtual server. The component installation process itself is standard. Select the appropriate item in the Add Roles and Features Wizard or, using PowerShell, issue the command

PS> Install-WindowsFeature Containers

During the process, the Virtual Switch network controller will also be installed; it must be configured immediately, otherwise further actions will generate an error. Let's look at the names of network adapters:

PS> Get-NetAdapter

To work, we need a controller with the External type. The New-VMSwitch cmdlet has many parameters, but for the sake of this example we’ll make do with the minimal settings:

PS> New-VMSwitch -Name External -NetAdapterName Ethernet0

We check:

PS> Get-VMSwitch | where {$_.SwitchType -eq "External"}

The Windows firewall will block connections to the container. Therefore, it is necessary to create an allowing rule, at least to be able to connect remotely using PowerShell remoting; for this we will allow TCP/80 and create a NAT rule:

PS> New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True
PS> Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP -ExternalIPAddress 0.0.0.0 -InternalIPAddress 192.168.1.2 -InternalPort 80 -ExternalPort 80

There is another option for simple deployment. The developers have prepared a script that allows you to install all dependencies automatically and configure the host. You can use it if you wish. The parameters inside the script will help you understand all the mechanisms:

PS> wget -uri https://aka.ms/tp4/Install-ContainerHost -OutFile C:\Install-ContainerHost.ps1
PS> C:\Install-ContainerHost.ps1

There is another option: deploying a ready-made virtual machine with container support. To do this, the same resource hosts a script that automatically performs all the necessary operations. Detailed instructions are listed on MSDN. Download and run the script:

PS> wget -uri https://aka.ms/tp4/New-ContainerHost -OutFile c:\New-ContainerHost.ps1
PS> C:\New-ContainerHost.ps1 -VmName WinContainer -WindowsImage ServerDatacenterCore

We set the name arbitrarily, and -WindowsImage indicates the type of image being collected. Options could be NanoServer, ServerDatacenter. Docker is also installed immediately; the SkipDocker and IncludeDocker parameters are responsible for its absence or presence. After launch, the download and conversion of the image will begin, during the process you will need to specify a password to log into the VM. The ISO file itself is quite large, almost 5 GB. If the channel is slow, the file can be downloaded on another computer, then renamed to WindowsServerTP4 and copied to C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks. We can log in to the installed virtual machine, specifying the password specified during assembly, and work.

Now you can move directly to using containers.

Using containers with PowerShell

The Containers module contains 32 PowerShell cmdlets, some of which are still incomplete, although in general they are sufficient to get everything working. They are easy to list:

PS> Get-Command -module Containers

You can get a list of available images using the Get-ContainerImage cmdlet, and of containers using Get-Container. For a container, the Status column shows its current state: stopped or running. But while the technology is in development, MS has not provided a public repository and, as mentioned, PowerShell currently works only with a local repository, so for experiments you will have to create it yourself.

So, we have a server with support, now we need the containers themselves. To do this, install the package provider ContainerProvider.
