The 1C server is running slowly. File database bottlenecks and how to avoid them (from recent experience)

Recently, users and administrators have increasingly complained that new 1C configurations developed on the basis of the managed application run slowly, in some cases unacceptably slowly. It is understandable that new configurations contain new functions and capabilities and are therefore more resource-hungry, but most users have no idea what primarily affects the operation of 1C in file mode. Let's try to close this gap.

In our previous publications, we have already touched on the impact of disk subsystem performance on the speed of 1C, but that study concerned local use of the application on a separate PC or a terminal server. Meanwhile, most small deployments involve working with a file database over a network, where one of the users' PCs serves as the server, or a dedicated file server based on a regular, most often also inexpensive, computer.

A small study of Russian-language resources on 1C showed that this issue is diligently avoided; if problems arise, it is usually recommended to switch to client-server or terminal mode. It has also become almost generally accepted that configurations on the managed application work much slower than conventional ones. As a rule, the arguments are about "iron": "Accounting 2.0 just flew, and the 'troika' barely moves." Of course, there is some truth in these words, so let's try to figure it out.

Resource consumption, first glance

Before we began this study, we set ourselves two goals: to find out whether managed application-based configurations are actually slower than conventional configurations, and which specific resources have the primary impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1, respectively, allocating them 2 cores of the host Core i5-4670 and 2 GB of RAM, which corresponds roughly to an average office machine. The server was placed on a RAID 0 array of two WD Se drives, and the client on a similar array of general-purpose disks.

As experimental bases, we selected several Accounting 2.0 configurations, release 2.0.64.12, which were then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that attracts attention is the significantly increased size of the "troika's" information base, as well as its much greater appetite for RAM:

We are ready to hear the usual "why did they cram all that into the three", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Employees of specialized companies servicing (read: updating) these databases rarely think about it either.

Meanwhile, the 1C information base is a full-fledged DBMS of its own format, and it also requires maintenance; for this there is even a tool called Testing and Correcting the Information Base. Perhaps the name played a cruel joke here: it implies a tool for troubleshooting problems, but low performance is also a problem, and restructuring and reindexing, along with table compression, are optimization tools well known to any DBMS administrator. Shall we check?

After applying the selected actions, the database sharply "lost weight", becoming even smaller than the "two", which no one had ever optimized; RAM consumption also decreased slightly.

Subsequently, after loading new classifiers and directories, creating indexes, etc., the size of the base will grow again; on the whole, "three" bases are larger than "two" bases. However, that is not what matters most: if the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this value should be taken into account when planning resources for working with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially ones like 1C in file mode that move significant amounts of data across the network. Most small-enterprise networks are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.

What happens when you launch a 1C file database over the network? The client downloads a fairly large amount of information into temporary folders, especially on the first, "cold" start. At 100 Mbit/s we quickly run into the channel's width, and loading can take a significant amount of time, in our case about 40 seconds (one graph division equals 4 seconds).
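As a rough sanity check of that figure, transfer time is easy to estimate from the link speed. A minimal sketch in Python; the ~500 MB transferred on a cold start and the 85% protocol efficiency are assumptions for illustration, not measurements from this test:

```python
# Back-of-envelope: how long a "cold" start spends just moving data
# across the network. All input figures are illustrative assumptions.

def load_time_seconds(data_mb: float, link_mbit: float, efficiency: float = 0.85) -> float:
    """Time to move data_mb megabytes over a link_mbit link.

    efficiency approximates SMB/CIFS and TCP overhead (assumed 0.85).
    """
    link_mb_per_s = link_mbit / 8 * efficiency  # Mbit/s -> MB/s
    return data_mb / link_mb_per_s

# ~500 MB on first start is an assumption consistent with the ~40 s
# observed above at 100 Mbit/s.
for link_mbit in (100, 1000):
    print(f"{link_mbit:>4} Mbit/s: ~{load_time_seconds(500, link_mbit):.0f} s")
```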

The second launch is faster, since some of the data is stored in the cache and remains there until a reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot", and the ratio of values is preserved. Therefore, we decided to express the result in relative values, taking the largest value of each measurement as 100%:

As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and the move from 100 Mbit/s to 1 Gbit/s speeds up loading fourfold. There is no difference between the optimized and non-optimized "troika" databases in this mode.

We also checked the influence of network speed in heavy modes, for example, during group reposting of documents. The result is also expressed in relative values:

Here it is more interesting: the optimized "three" base in a 100 Mbit/s network works at the same speed as the "two", while the non-optimized one is twice as slow. On gigabit the ratios hold: the unoptimized "three" is also twice as slow as the "two", and the optimized one lags behind by a third. Also, the move to 1 Gbit/s cuts execution time threefold for edition 2.0 and in half for edition 3.0.

To evaluate the impact of network speed on everyday work, we used the Performance Measurement tool, performing a sequence of predetermined actions in each database.

In fact, for everyday tasks network throughput is not a bottleneck: an unoptimized "three" is only 20% slower than a "two", and after optimization it turns out about as much faster; the advantages of working in thin client mode are evident. Moving to 1 Gbit/s gives the optimized base no advantage, while the unoptimized one and the "two" begin to work faster, showing a small difference between themselves.

The tests make it clear that the network is not a bottleneck for the new configurations, and the managed application even runs faster than the conventional one. You can recommend moving to 1 Gbit/s if heavy tasks and database loading speed are critical for you; in other cases, the new configurations allow effective work even in slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article, we achieved an increase in 1C performance by placing databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured the performance of the server's disks during group reposting in two databases at once and got a rather optimistic result.

Despite the relatively high number of input/output operations per second (IOPS), 913, the queue length never exceeded 1.84, which is a very good result for a two-disk array. Based on this, we may assume that a mirror of ordinary disks will be enough for the normal operation of 8-10 network clients in heavy modes.
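The queue figure can be read through Little's law. A minimal sketch based only on the two numbers quoted above; the "queue of up to 2 per spindle" rule of thumb is a common sizing heuristic, not a hard limit:

```python
# Interpreting the counters above: 913 IOPS at an average queue of 1.84
# on a two-disk array.

iops = 913
avg_queue = 1.84
spindles = 2

print(f"Queue per spindle: {avg_queue / spindles:.2f} (rule of thumb: <= 2 is healthy)")

# Little's law (L = lambda * W) gives the implied average I/O latency:
latency_ms = avg_queue / iops * 1000
print(f"Implied average I/O latency: {latency_ms:.2f} ms")
```

At roughly 2 ms per operation the array still has headroom, which is why a plain mirror looks sufficient for 8-10 heavy clients.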

So is an SSD needed on the server? The best answer comes from testing, which we carried out using a similar method: the network connection is 1 Gbit/s everywhere, and the result is again expressed in relative values.

Let's start with the loading speed of the database.

It may seem surprising to some, but the SSD on the server does not affect the loading speed of the database. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to group reposting:

We have already noted above that disk performance is quite sufficient even for heavy modes, so SSD speed has no effect either, except for the unoptimized base, which on the SSD caught up with the optimized one. This once again confirms that optimization operations organize information in the database, reducing the number of random I/O operations and increasing access speed.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from the SSD. You can, of course, buy an SSD, but it would be much better to think about timely database maintenance. Also, do not forget to defragment the partition holding the infobases on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of locally installed 1C in the previous material; much of what was said there is also true for working in network mode. 1C uses disk resources quite actively, including for background and routine tasks. In the figure below you can see how Accounting 3.0 actively accesses the disk for about 40 seconds after loading.

At the same time, you should be aware that for a workstation actively working with one or two information bases, the performance of a regular mass-produced HDD is quite sufficient. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, loading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but by itself it cannot cause the program to crawl.

RAM

Despite the fact that RAM is now obscenely cheap, many workstations continue to run with the amount of memory installed at purchase. This is where the first problems lie in wait. Based on the fact that the average "troika" needs about 500 MB of memory, we may assume that a total of 1 GB of RAM will not be enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance everything is not so bad: the program has curbed its appetite and fit into the available memory. But let's not forget that the need for operational data has not changed, so where did it go? It was pushed out to disk: cache, swap file, etc. The essence of the operation is that data not needed at the moment is sent from fast RAM, which is in short supply, to slow disk memory.

Where does this lead? Let's see how system resources are used in heavy operations, for example, launching group reposting in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant, increasing occasionally during processing but never acting as a limiting factor.

Now let's reduce the memory to 1 GB:

The situation changes radically: the main load now falls on the hard drive, while the processor and network sit idle, waiting for the system to read the needed data from disk into memory and push unneeded data out.

At the same time, even subjectively, working with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable: directories and journals opened with significant delays and heavy disk access. For example, opening the Sales of Goods and Services journal took about 20 seconds, accompanied all that time by high disk activity (highlighted with a red line).

To objectively evaluate the impact of RAM on the performance of managed-application configurations, we carried out three measurements: the loading speed of the first database, the loading speed of the second database, and group reposting in one of the databases. Both databases are completely identical, created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: if the loading time grows by about a third, which is still quite tolerable, the time to perform operations in the database triples; there is no point talking about comfortable work in such conditions. This, by the way, is the case where buying an SSD can improve the situation, but it is much easier (and cheaper) to address the cause rather than the consequences and simply buy the right amount of RAM.

Lack of RAM is the main reason working with new 1C configurations becomes uncomfortable. Systems with 2 GB of memory on board should be considered minimally suitable. Also keep in mind that in our case "greenhouse" conditions were created: a clean system with only 1C and Task Manager running. In real life, a work computer typically also has a browser and an office suite open, an antivirus running, and so on, so assume 500 MB per database plus some reserve, so that heavy operations do not run into a memory shortage and a sharp drop in productivity.
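The sizing rule from the previous paragraph is easy to turn into arithmetic. A minimal sketch; the OS and "other applications" figures are assumptions, only the 500 MB per database comes from the tests above:

```python
# RAM sizing for a workstation running file-mode 1C (edition 3.0).

def required_ram_gb(databases: int,
                    per_db_mb: int = 500,       # from the measurements above
                    os_mb: int = 1024,          # Windows itself (assumed)
                    other_apps_mb: int = 1024,  # browser, office, antivirus (assumed)
                    reserve_mb: int = 512) -> float:
    return (databases * per_db_mb + os_mb + other_apps_mb + reserve_mb) / 1024

for n in (1, 2, 3):
    print(f"{n} database(s): ~{required_ram_gb(n):.1f} GB recommended")
```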

CPU

Without exaggeration, the central processor can be called the heart of the computer, since it ultimately performs all the calculations. To evaluate its role, we conducted another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the tests were run twice, with 1 GB and 2 GB of memory.

The result turned out quite interesting and unexpected: the more powerful processor quite effectively took on the load when resources were short, the rest of the time giving no tangible advantage. 1C Enterprise (in file mode) can hardly be called a CPU-hungry application; it is rather undemanding. In difficult conditions, the processor is burdened not so much by the application's own calculations as by servicing overhead: extra input/output operations and the like.

Conclusions

So, why is 1C slow? First of all, lack of RAM; the main load in this case falls on the hard drive and the processor. And if those do not shine with performance, as is usual in office configurations, we get the situation described at the beginning of the article: the "two" worked fine, but the "three" is ungodly slow.

In second place is network performance: a slow 100 Mbit/s channel can become a real bottleneck, though the thin client mode maintains a fairly comfortable level of operation even on slow channels.

Then pay attention to the disk drive; buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one is a good idea. The difference between generations of hard drives can be assessed from the following material: Review of two inexpensive Western Digital Blue series drives 500 GB and 1 TB.

And finally, the processor. A faster model will certainly not hurt, but there is little point in raising its performance unless the PC is used for heavy operations: group processing, heavy reports, month-end closing, and so on.

We hope this material helps you quickly get to the bottom of the question "why is 1C slow" and solve it effectively and without extra costs.


1C: Accounting is one of the most famous and convenient accounting programs. Proof of this is its widespread use in all areas of activity: trade, production, finance, and so on.

Unfortunately, like all computer programs, 1C: Accounting experiences various crashes and freezes. One of the most common problems is slow operation.

This article was written to help understand the reasons for it and to try to resolve them.

Eliminating common causes of slow 1C operation

1. The most common reason for slow operation is a long time to access the 1C database file, which can be caused by errors on the hard drive, by a poor-quality Internet connection when cloud technologies are used, or by antivirus settings.

Solution: scan the hard drive for errors and defragment it. Test the Internet access speed; if the reading is low (less than 1 Mbit/s), contact the provider's technical support. Temporarily disable the antivirus protection and the firewall in the antivirus suite.
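Checking the connection speed does not require special tools. A minimal sketch; the URL is a placeholder, substitute any large file hosted by your provider or a nearby mirror:

```python
# Measure effective download speed before blaming 1C itself.
import time
import urllib.request

TEST_URL = "https://example.com/100MB.bin"  # hypothetical test file

def measure_mbit_per_s(url: str, max_bytes: int = 25 * 1024 * 1024) -> float:
    start = time.monotonic()
    read = 0
    with urllib.request.urlopen(url) as resp:
        while read < max_bytes:
            chunk = resp.read(64 * 1024)
            if not chunk:          # server sent less than max_bytes
                break
            read += len(chunk)
    return read * 8 / (time.monotonic() - start) / 1_000_000

print(f"~{measure_mbit_per_s(TEST_URL):.1f} Mbit/s")  # below ~1 Mbit/s: call the ISP
```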

2. Perhaps the slow operation is caused by the large size of the database file.

To solve this problem, open 1C in Configurator mode, select Administration in the system menu, then Testing and Correction. In the window, check the "Compression of infobase tables" item; the "Testing and correction" item below should be active. Click Run and wait for the process to complete.
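The same maintenance can be scheduled instead of clicked through, via the Designer's batch mode. A minimal sketch; the paths are placeholders, and the /IBCheckAndRepair switches are quoted from memory of the platform documentation, so verify them against your platform version before relying on this:

```python
# Run "Testing and correction" unattended (e.g., from Task Scheduler).
import subprocess

V8 = r"C:\Program Files\1cv8\8.3.5.1443\bin\1cv8.exe"  # adjust to your install
BASE = r"C:\bases\accounting30"                         # file infobase folder

subprocess.run(
    [V8, "DESIGNER", "/F", BASE,
     "/IBCheckAndRepair", "-ReIndex", "-IBCompression",  # switch names: assumed, verify
     "/Out", r"C:\logs\repair.log", "/DisableStartupMessages"],
    check=True,  # raise if 1cv8 reports an error
)
```

Run it at night: testing and correction requires exclusive access to the base.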

3. The next possible reason is outdated system software or an outdated version of the program itself.

The way out of this situation: update the operating system software or install the latest version of 1C. For prevention, always update to the latest release, which fixes errors present in earlier configurations.

To install the latest version of the 1C system, enter the program in Configurator mode, go to Service -> Service -> Configuration Update in the menu, then accept the default settings and click the Update button.


This article identifies the main mistakes novice 1C administrators make and shows how to fix them, using the Gilev test as an example.

The main purpose of writing this article is to avoid repeating obvious nuances for those administrators (and programmers) who have not yet gained experience with 1C.

The secondary goal: if I have missed anything, Infostart readers will be the quickest to point it out to me.

V. Gilev's test has already become a de facto standard. The author gave quite clear recommendations on his website; I will simply present some results and comment on the most likely errors. Naturally, the results on your equipment may differ; this is just a guide to what to expect and what to strive for. I note right away that changes must be made step by step, checking after each step what result it gave.

There are similar articles on Infostart; I will put links to them in the relevant sections (if I miss something, please suggest it in the comments and I will add it). So, suppose your 1C is slow. How do you diagnose the problem, and how do you tell who is to blame, the administrator or the programmer?

Initial data:

The test computer, the main guinea pig: HP DL180 G6, equipped with 2x Xeon 5650, 32 GB RAM, an Intel 362i NIC, Win 2008 R2. For comparison, a Core i3-2100 shows comparable results in the single-threaded test. I deliberately chose equipment that is not the newest; with modern equipment the results are noticeably better.

For testing separate 1C and SQL servers, the SQL server: IBM System x3650 M4, 2x Xeon E5-2630, 32 GB, Intel 350, Win 2008 R2.

To test a 10 Gbit network, Intel 520-DA2 adapters were used.

The file version (the database is on the server in a shared folder; clients connect over the network via the CIFS/SMB protocol). The algorithm, step by step:

0. Add Gilev's test database to the file server, in the same folder as the main databases. Connect from a client computer and run the test. Note the result.

As a guide: even for old computers some 10 years old (a Pentium on socket 775), the time from clicking the 1C:Enterprise shortcut to the database window appearing should be under a minute. (A Celeron means slow.)

If your computer is worse than a socket-775 Pentium with 1 GB of RAM, then I sympathize: comfortable work on 1C 8.2 in the file version will be hard to achieve. Think about either upgrading (it's high time) or switching to a terminal server (or a web server, in the case of thin clients and managed forms).

If the computer is no worse, you can kick the administrator. At a minimum, check the operation of the network, the antivirus, and the HASP protection driver.

If Gilev's test at this stage shows 30 "parrots" (the units the test reports its score in) or higher, but the working 1C database is still slow, the questions should go to the programmer.

1. As a guide to how much the client computer itself can "squeeze out", we check the operation of this computer alone, without the network. We install the test database on the local computer (on a very fast disk). If the client computer has no decent SSD, a ramdisk is created. For now, the simplest free option is Ramdisk Enterprise.

To test version 8.2, a 256 MB ramdisk is enough, but one thing is most important: after rebooting the computer, with the ramdisk running, it should have 100-200 MB free. Accordingly, without a ramdisk there should be 300-400 MB of free memory for normal operation.

To test version 8.3, a 256 MB ramdisk is also enough, but more free RAM is needed.
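The free-space condition is easy to verify automatically. A minimal sketch; R: is an assumed ramdisk drive letter:

```python
# Check the "100-200 MB free on the ramdisk" condition from above.
import shutil

usage = shutil.disk_usage("R:\\")           # ramdisk letter: assumption
free_mb = usage.free / 1024**2
status = "OK" if free_mb >= 100 else "too little for the test"
print(f"Free on ramdisk: {free_mb:.0f} MB ({status})")
```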

When testing, watch the processor load. In a case close to ideal (a ramdisk), local file-mode 1C fully loads one processor core while running. Accordingly, if during testing your processor core is not fully loaded, look for weak points. The influence of the processor on the operation of 1C is described elsewhere, a little emotionally but generally correctly. Just for reference: even on modern Core i3s with high clock speeds, figures of 70-80 are quite realistic.

The most common errors at this stage.

  • An incorrectly configured antivirus. There are many antiviruses and the settings differ for each; I will only say that with proper configuration neither Dr.Web nor Kaspersky interferes with 1C. With default settings, roughly 3-5 parrots (10-15%) can be lost.
  • Performance mode. For some reason few people pay attention to this, yet the effect is the most significant. If you need speed, you must do this on both client and server computers. (There is a good description at Gilev's site. The only caveat: on some motherboards, if you turn off Intel SpeedStep you cannot turn on Turbo Boost.)
In short, while 1C is running there is a lot of waiting for responses from other devices (disk, network, etc.). While waiting for a response, if power saving is enabled, the processor lowers its frequency. A response arrives, 1C (that is, the processor) needs to work, but the first clock cycles run at the reduced frequency; then the frequency rises, and 1C again waits for the next device. And so on, many hundreds of times per second.

You can (and preferably) enable performance mode in two places:

  • Via the BIOS. Disable the C1, C1E, Intel C-state (C2, C3, C4) modes. They are named differently in different BIOSes, but the meaning is the same. It takes a while to find and requires a reboot, but once done you can forget about it. If you do everything correctly in the BIOS, the speed will increase. On some motherboards you can configure the BIOS so that the Windows performance mode no longer plays a role. (Examples of BIOS settings are at Gilev's site.) These settings mostly concern server processors or "advanced" BIOSes; if you have not found them and you do NOT have a Xeon, that's okay.

  • Control Panel - Power Options - High Performance. The downside: if the computer has not been serviced for a long time, it will make louder fan noise, heat up more and consume more energy. That is the price of performance.
How to check that the mode is enabled: launch Task Manager - Performance - Resource Monitor - CPU and wait until the processor is busy with nothing.
These are the default settings:

BIOS C-state enabled, balanced power consumption mode


BIOS C-state enabled, high performance mode

For Pentium and Core you can stop there; a few more "parrots" can still be squeezed out of a Xeon


In the BIOS, C-state is disabled; high performance mode.

If you don't use Turbo Boost, this is what it should look like:

a server tuned for performance


And now the numbers. Let me remind you: Intel Xeon 5650, ramdisk. In the first case the test shows 23.26, in the last, 49.5. The difference is almost twofold. The numbers may vary, but the ratio stays essentially the same for Intel Core as well.

Dear administrators, you can criticize 1C as much as you like, but if end users need speed, you must enable high performance mode.
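Instead of eyeballing Resource Monitor, the frequency behavior described above can be sampled directly. A minimal sketch using the third-party psutil package (pip install psutil); note that some systems expose only a coarse "current" frequency:

```python
# Sample the reported CPU clock while idle and under load: on a
# power-saving plan the idle clock sags, on High Performance it stays up.
import threading
import time
import psutil

def sample(label: str, samples: int = 10) -> None:
    readings = []
    for _ in range(samples):
        readings.append(psutil.cpu_freq().current)  # MHz
        time.sleep(0.5)
    print(f"{label}: min {min(readings):.0f} MHz, max {max(readings):.0f} MHz")

sample("idle")

stop = False
def burn() -> None:   # busy-loop one core to trigger the frequency ramp
    while not stop:
        pass

t = threading.Thread(target=burn)
t.start()
sample("under load")
stop = True
t.join()
```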

c) Turbo Boost. First you need to find out whether your processor supports this feature. If it does, you can still quite legally gain some performance. (I do not want to touch on frequency overclocking, especially of servers; do it at your own risk. But I agree that raising the bus speed from 133 to 166 MHz gives a very noticeable increase in both speed and heat dissipation.)

How to turn on Turbo Boost is described elsewhere. But! For 1C there are some nuances (not the most obvious ones). The difficulty is that the maximum effect of Turbo Boost appears when C-state is turned on. And we get something like this:

Note that the multiplier is at maximum, the core speed is beautiful, and the performance is high. But what will happen with 1C as a result?

In the end it turns out that, according to CPU performance tests, the variant with the multiplier of 23 is ahead; according to Gilev's tests in the file version, the performance with multipliers of 22 and 23 is the same; but in the client-server version, the variant with the multiplier of 23 is downright terrible (even with C-state set to level 7, it is still slower than with C-state turned off). Therefore, the recommendation is to check both options for yourself and choose the better one. In any case, the difference between 49.5 and 53 parrots is quite significant, especially without much effort.

Conclusion: Turbo Boost must be turned on. Let me remind you that it is not enough to enable the Turbo Boost item in the BIOS; you also need to look at other settings (BIOS: QPI L0s, L1 - disable; demand scrubbing - disable; Intel SpeedStep - enable; Turbo Boost - enable. Control Panel - Power Options - High Performance). And I would still (even for the file version) choose the option with C-state turned off, even though the multiplier is smaller there. It will turn out something like this...

A rather controversial point is memory frequency. For example, some sources show memory frequency having a very strong influence. My tests did not reveal such a dependence. I will not compare DDR2/3/4; I will show the results of changing the frequency within the same line. The memory is the same, but in the BIOS we force lower frequencies.




And the test results: 1C 8.2.19.83; for the file version, a local ramdisk; for client-server, 1C and SQL on one computer over Shared Memory. Turbo Boost is disabled in both variants. 8.3 shows comparable results.

The difference is within the measurement error. I specifically pulled out the CPU-Z screenshots to show that other parameters change along with the frequency, the same CAS Latency and RAS to CAS Delay, which neutralizes the frequency change. A difference will appear when the memory modules are physically replaced with faster ones, but even there the numbers are not particularly significant.

2. Once we have sorted out the processor and memory of the client computer, we move on to the next very important place: the network. Volumes of books have been written about network tuning and there are articles on Infostart, but here I will not focus on that topic. Before starting 1C testing, please make sure that iperf between the two computers shows the full bandwidth (for 1 Gbit cards, at least 850 Mbit, better 950-980), and that Gilev's advice has been followed. Then the simplest test of operation will be, oddly enough, copying one large file (5-10 GB) over the network. An indirect sign of normal operation on a 1 Gbit network is an average copying speed of 100 MB/s; good operation, 120 MB/s. Note that the weak point may be (among other things) processor load: the SMB protocol on Linux parallelizes quite poorly, and during operation it can easily eat up one processor core and go no further.
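The copy test can also be scripted so the results are repeatable. A minimal sketch; the share path is a placeholder, and writing zeros assumes the link does not compress traffic (SMB does not by default):

```python
# Time a large sequential write to the share: ~100 MB/s is the healthy
# mark for a 1 Gbit network, ~120 MB/s is good.
import os
import time

SHARE_FILE = r"\\server\bases\throughput.tmp"  # hypothetical path
SIZE_MB = 1024
CHUNK = b"\0" * (4 * 1024 * 1024)              # 4 MB writes

start = time.monotonic()
with open(SHARE_FILE, "wb") as f:
    for _ in range(SIZE_MB // 4):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                        # make sure data left the client
elapsed = time.monotonic() - start
os.remove(SHARE_FILE)

print(f"{SIZE_MB / elapsed:.0f} MB/s")
```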

And one more thing. With default Windows settings, a Windows client works best with a Windows server (or even a Windows workstation) over the SMB/CIFS protocol; a Linux client (Debian, Ubuntu; I did not look at others) works better with Linux over NFS (it also works over SMB, but the parrots are higher with NFS). The fact that a linear copy from a Windows machine to a Linux NFS server runs faster in a single stream means nothing. Debian tuning for 1C is a topic for a separate article which I am not yet ready for, although I can say that in the file version I got even slightly better performance than the Win version on the same equipment, but with Postgres and over 50 users everything is still very bad for me.

The most important thing, which "burned" administrators know but beginners do not take into account. There are many ways to set the path to the 1C database. You can use \\server\share, you can use \\192.168.0.1\share, you can run net use z: \\192.168.0.1\share (and in some cases this method will also work, but not always) and then specify drive Z. All these paths seem to point to the same place, but for 1C there is only one method that reliably provides normal performance. So, here is what you need to do correctly:

On the command line (or in policies, or however is convenient for you), run net use DriveLetter: \\server\share. Example: net use m: \\server\bases. I specifically emphasize: NOT the IP address, but the server NAME. If the server name is not visible, add it to the DNS on the server, or locally to the hosts file. But the address must be by name. Accordingly, in the path to the database, refer to this drive (see the picture).
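The same recipe, scripted, with an early failure if the name does not resolve. A minimal sketch; the server and share names are placeholders for your environment:

```python
# Map the drive by server NAME (not IP) and verify name resolution first.
import socket
import subprocess

SERVER = "server"                    # NetBIOS/DNS name, NOT an IP address
SHARE = rf"\\{SERVER}\bases"
DRIVE = "M:"

# Fail early if the name does not resolve: add it to DNS or the hosts file.
print(SERVER, "->", socket.gethostbyname(SERVER))

subprocess.run(["net", "use", DRIVE, SHARE, "/persistent:yes"], check=True)
# In 1C, point the infobase path at M:\..., not at \\192.168.x.x\...
```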

And now I will show with numbers why this advice matters. Initial data: Intel X520-DA2, Intel 362, Intel 350 and Realtek 8169 cards; OS Win 2008 R2, Win 7, Debian 8; latest drivers and updates applied. Before testing I made sure that iperf gives the full bandwidth (except for the 10 Gbit cards, which only managed to squeeze out 7.2 Gbit; I will look into why later, the test server is not yet configured properly). The disks differ, but everywhere there is an SSD (I inserted a single disk specifically for testing, not loaded with anything else) or a RAID of SSDs. The 100 Mbit speed was obtained by limiting the settings of the Intel 362 adapter. There was no difference between the 1 Gbit copper Intel 350 and the 1 Gbit optical Intel X520-DA2 (obtained by limiting the adapter speed). Maximum performance mode; Turbo Boost is turned off (purely for comparability of results: Turbo Boost adds a little less than 10% to good results, and may have no effect at all on bad ones). Versions 1C 8.2.19.86 and 8.3.6.2076. I do not give all the numbers, only the most interesting ones, so that you have something to compare against.

Gilev test results in "parrots", three runs per platform version:

| Configuration | 1C 8.2 (8.2.19.83) | 1C 8.3 (8.3.6.2076) |
|---|---|---|
| 100 Mbit CIFS, Win 2008 - Win 2008, by IP address | 11.20 / 11.29 / 12.15 | 6.13 / 6.61 / n/a |
| 100 Mbit CIFS, Win 2008 - Win 2008, by name | 26.18 / 26.18 / 25.77 | 34.25 / 33.33 / 33.78 |
| 1 Gbit CIFS, Win 2008 - Win 2008, by IP address | 15.20 / 15.29 / 15.15 | 14.98 / 15.58 / 15.53 |
| 1 Gbit CIFS, Win 2008 - Win 2008, by name | 43.86 / 43.10 / 43.10 | 43.10 / 43.86 / 43.48 |
| 1 Gbit CIFS, Win 2008 - Win 7, by name | 40.65 / 40.65 / n/a | 39.37 / 40.00 / 39.37 |
| 1 Gbit CIFS, Win 2008 - Debian, by name | 37.04 / 36.76 / n/a | 37.59 / 37.88 / 37.59 |
| 10 Gbit CIFS, Win 2008 - Win 2008, by IP address | 16.23 / 15.11 / 14.97 | 15.53 / 16.23 / n/a |
| 10 Gbit CIFS, Win 2008 - Win 2008, by name | 44.64 / 44.10 / 42.74 | 42.74 / 42.74 / 42.74 |

Conclusions (from the table and from personal experience; applies only to the file version):

  • Over the network you can get quite decent numbers if the network is properly configured and the path is entered correctly in 1C. Even a first-generation Core i3 can easily produce 40+ parrots, which is quite good, and these are not just parrots: in real work the difference is noticeable too. But! With several (more than 10) users the limitation will no longer be the network; 1 Gbit is still enough there, but locking during multi-user work (Gilev).
  • The 1C 8.3 platform is many times more demanding about proper network configuration. Basic settings: see Gilev, but keep in mind that anything can have an effect. I have seen speedups from uninstalling (not just disabling) the antivirus, from removing protocols like FCoE, from switching drivers to an older but Microsoft-certified version (this especially concerns cheap cards like ASUS and D-Link), and from removing the second network card from the server. There are many options; set up your network attentively. There may well be a situation where platform 8.2 gives acceptable numbers and 8.3 gives two or more times fewer. Try playing with 8.3 platform versions; sometimes the effect is very large.
  • 1C 8.3.6.2076 (maybe later; I have not yet pinned down the exact version) is still easier to set up over the network than 8.3.7.2008. I managed to achieve normal network operation from 8.3.7.2008 (in comparable parrots) only a few times and could not repeat it for the general case. I did not dig deep, but judging by the long Process Explorer traces, writing there is not as smooth as in 8.3.6.
  • Although when working over a 100 Mbit network its load graph is small (you could say the network is free), the operating speed is still much lower than on 1 Gbit. The reason is network latency.
  • All other things being equal (a well-functioning network), for 1C 8.2 an Intel-Realtek pairing is 10% slower than Intel-Intel, while Realtek-Realtek can produce sharp drops out of nowhere. Therefore, if you have the money, keep Intel network cards everywhere; if not, put Intel at least on the server (hello, Captain Obvious). There are also many times more tuning instructions for Intel network cards.
  • Default antivirus settings (using Dr.Web version 10 as an example) take away about 8-10% of parrots. If configured properly (allow the 1cv8 process to do everything, though this is not safe), the speed is the same as without an antivirus.
  • Do NOT blindly follow Linux gurus. A server with Samba is great and free, but if you install Win XP or Win 7 (or better, a server OS) on the server, the file version of 1C will work faster. Yes, Samba, the protocol stack, the network settings and much more can be tuned well in Debian/Ubuntu, but that is recommended for specialists. There is no point in installing Linux with default settings and then saying it is slow.
  • It is quite a good idea to check the operation of disks connected via net use with fio (see the sketch after this list). At least it will be clear whether the problems are with the 1C platform or with the network/disk.
  • For the single-user version I cannot think of tests (or a situation) where the difference between 1 Gbit and 10 Gbit would be visible. The only place 10 Gbit gave better results for the file version was connecting disks via iSCSI, but that is a topic for a separate article. Still, I think 1 Gbit cards are enough for the file version.
  • I do not understand why, on a 100 Mbit network, 8.3 works noticeably faster than 8.2, but it is a fact. All other equipment and all other settings are absolutely the same; one case simply tests 8.2 and the other 8.3.
  • Untuned NFS between two Windows machines, or Windows and Linux, gives 6 parrots; I did not include it in the table. After tuning I got 25, but it was unstable (the spread between measurements exceeded 2 units). I cannot yet give recommendations on using Windows with the NFS protocol.
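A minimal sketch of the fio check mentioned in the list above: random 4 KB reads against a file on the net-use-mapped drive. fio must be installed and on PATH; the drive letter and file size are assumptions:

```python
# Random-read check of a mapped network drive with fio.
import subprocess

subprocess.run([
    "fio",
    "--name=1c-share-check",
    "--filename=M\\:\\fio.tmp",   # fio requires escaping ':' on Windows
    "--size=1G",
    "--rw=randread",
    "--bs=4k",
    "--iodepth=4",
    "--runtime=60", "--time_based",
    "--ioengine=windowsaio",      # on Linux use libaio instead
], check=True)
```

If fio shows decent IOPS and latency while 1C is still slow, the problem is more likely the platform or the configuration than the network/disk.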
After all the settings and checks, we run the test again from the client computer and rejoice at the improved result (if it improved). If the result has improved, there are more than 30 parrots (and especially more than 40), fewer than 10 users work simultaneously, and the working database still slows down, it is almost certainly a problem for the programmer (or you have reached the peak capabilities of the file version).

Terminal server (the database is on the server; clients connect over the network via the RDP protocol). The algorithm, step by step:

  • We add Gilev's test database to the server, in the same folder as the main databases. We connect from that same server and run the test. We note the result.
  • Exactly as in the file version, we configure the processor. In the case of a terminal server, the processor generally plays the main role (assuming there are no obvious weak points, such as lack of memory or a huge amount of unnecessary software).
  • Configuring network cards on a terminal server has virtually no effect on the operation of 1C. For "special" comfort, if your server produces more than 50 parrots, you can play with newer versions of the RDP protocol, purely for user comfort: faster response and scrolling.
  • When a large number of users are actively working (and here you can already try connecting 30 people to one database), it is very advisable to install an SSD. For some reason it is believed that the disk does not particularly affect 1C's operation, but all such tests are run with the controller's write cache enabled, which is incorrect: the test base is small, it fits nicely into the cache, hence the high numbers. On real (large) databases everything will be completely different, so the cache is disabled for the tests below.
For example, I checked the Gilev test with different disk options, installing disks from what was at hand, just to show the trend. The difference between 8.3.6.2076 and 8.3.7.2008 is small (in the ramdisk Turbo Boost variant 8.3.6 produces 56.18 and 8.3.7.2008 produces 55.56; in other tests the difference is even smaller). Power mode: maximum performance; Turbo Boost disabled (unless stated otherwise).
| Disk subsystem | 1C 8.2 (8.2.19.83) | 1C 8.3 (8.3.7.2008) |
|---|---|---|
| RAID 10, 4x SATA 7200 (ATA ST31500341AS) | 21.74 / 21.65 / 21.65 | 33.33 / 33.46 / 35.46 |
| RAID 10, 4x SAS 10k | 28.09 / 28.57 / 28.41 | 42.74 / 42.02 / 43.01 |
| RAID 10, 4x SAS 15k | 32.47 / 32.05 / 31.45 | 45.05 / 45.05 / 44.64 |
| Single SSD | 49.02 / 48.54 / 48.54 | 51.55 / 51.02 / 51.55 |
| Ramdisk | 50.51 / 49.02 / 49.50 | 52.08 / 52.08 / 52.08 |
| Ramdisk, Turbo Boost | 53.76 / 53.19 / 53.19 | 55.56 / 54.95 / 56.18 |
| Cache enabled on the RAID controller | 49.02 | 51.55 |
  • The enabled RAID controller cache eliminates all the differences between the disks; the numbers are the same for both SATA and SAS. Testing with it on a small amount of data is useless and not indicative of anything.
  • For platform 8.2, the difference between the SATA and SSD options is more than twofold. This is not a typo. If you watch the performance monitor during the test on SATA drives, you clearly see "Active disk time (%)" at 80-95. Yes, if you enable the disks' own write cache the speed rises to 35, and with the RAID controller cache, to 49 (regardless of which disks are being tested at the moment). But those are synthetic cache parrots; in real work, with large databases, there will never be a 100% write-cache hit ratio.
  • The speed of even cheap SSDs (I tested on an Agility 3) is quite enough to run the file version. The write endurance is another matter and must be assessed in each specific case; clearly an Intel 3700 will have it an order of magnitude higher, but the price matches. And yes, I understand that when testing an SSD I am also largely testing that disk's own cache; the real results will be lower.
  • The most correct (in my view) solution is to allocate 2 SSDs in a mirrored RAID for the file database (or several file databases) and put nothing else there. Yes, in a mirror the SSDs wear out equally, and that is a minus, but at least the controller electronics are somehow protected from errors.
  • The main advantages of SSDs for the file version appear when there are many databases, each with several users. If there are 1-2 databases and about 10 users, SAS disks will be enough. (But in any case, watch the load on these disks, at least through perfmon.)
  • The main advantages of a terminal server: it can have very weak clients, and network settings affect a terminal server much less (again, your Captain Obvious).
Conclusion: if you run the Gilev test on the terminal server (from the same disk where the working databases live) at the moments when the working database slows down, and the Gilev test shows a good result (above 30), then the slow operation of the main working database is most likely the programmer's fault.

If Gilev's test shows small numbers while you have a high-clock processor and fast disks, then the administrator needs to take at least perfmon, record all the results somewhere, and watch, observe and draw conclusions. There is no universal advice here.
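As a lightweight stand-in for a perfmon log, the counters can be recorded with a short script. A minimal sketch using the third-party psutil package; stop it with Ctrl+C once the slowness has been reproduced:

```python
# Log CPU, disk and network activity to CSV once a second.
import csv
import time
import psutil

with open("perflog.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["time", "cpu_pct", "disk_read_MB", "disk_write_MB",
                "net_sent_MB", "net_recv_MB"])
    d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
    while True:                      # Ctrl+C to stop
        time.sleep(1)
        d, n = psutil.disk_io_counters(), psutil.net_io_counters()
        w.writerow([time.strftime("%H:%M:%S"),
                    psutil.cpu_percent(),
                    (d.read_bytes - d0.read_bytes) / 1024**2,
                    (d.write_bytes - d0.write_bytes) / 1024**2,
                    (n.bytes_sent - n0.bytes_sent) / 1024**2,
                    (n.bytes_recv - n0.bytes_recv) / 1024**2])
        f.flush()
        d0, n0 = d, n
```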

Client-server option.

Tests were carried out only on 8.2, because on 8.3 everything depends quite seriously on the version.

For testing, I chose different server options and networks between them to show the main trends.

Gilev test results in "parrots", three runs of 1C 8.2 per configuration (the distinguishing labels of the last five configurations did not survive the source layout; they all read "1C: Xeon 5650 =", apparently 1C and SQL on one server):

| Configuration | 1C 8.2 (three runs) |
|---|---|
| 1C: Xeon 5520, SQL: Xeon E5-2630 | 16.78 / 17.12 / 16.72 |
| 1C: Xeon 5520, SQL: Xeon E5-2630, Fiber Channel - SSD | 18.23 / 17.06 / 16.89 |
| 1C: Xeon 5520, SQL: Xeon E5-2630, Fiber Channel - SAS | 16.84 / 14.53 / 13.44 |
| 1C: Xeon 5650, SQL: Xeon E5-2630 | 28.57 / 29.41 / 29.76 |
| 1C: Xeon 5650, SQL: Xeon E5-2630, Fiber Channel - SSD | 27.78 / 28.41 / 28.57 |
| 1C: Xeon 5650, SQL: Xeon E5-2630 | 32.05 / 31.45 / 32.05 |
| 1C: Xeon 5650 = SQL, variant 1 | 34.72 / 34.97 / 34.97 |
| 1C: Xeon 5650 = SQL, variant 2 | 36.50 / 36.23 / 36.23 |
| 1C: Xeon 5650 = SQL, variant 3 | 23.26 / 23.81 / 23.26 |
| 1C: Xeon 5650 = SQL, variant 4 | 40.65 / 40.32 / 40.32 |
| 1C: Xeon 5650 = SQL, variant 5 | 39.37 / 39.06 / 39.06 |

It seems I have considered all the interesting options; if anything else interests you, write in the comments and I will try to cover it.

  • SAS on a storage system is slower than local SSDs, even though the storage system has larger cache sizes. SSDs, both local and on the storage system, work at comparable speeds on Gilev's test. I do not know any standard multi-threaded test (one that exercises not just writes but all the equipment) apart from the 1C load test from the MCC.
  • Changing the 1C server from the 5520 to the 5650 almost doubled performance. Yes, the server configurations do not match completely, but it shows the trend (no surprise).
  • Increasing the frequency on the SQL server certainly has an effect, but not the same as on the 1C server; MS SQL Server is excellent at using multiple cores and free memory (if you ask it to).
  • Changing the network between 1C and SQL from 1 Gbit to 10 Gbit gives roughly 10% more parrots. I expected more.
  • Enabling Shared Memory still gives an effect, although not the 15% described in some articles. Be sure to do it; fortunately it is quick and easy. If during installation someone gave the SQL server a named instance, then for 1C to work the server name must be specified not by FQDN (tcp/ip will work), not through localhost or just ServerName, but as ServerName\InstanceName, for example zz-test\zztest. (Otherwise there will be a DBMS error: Microsoft SQL Server Native Client 10.0: Shared Memory Provider: The shared memory library used to establish a connection with SQL Server 2000 was not found. HRESULT=80004005, HRESULT=80004005, HRESULT=80004005, SQLSrvr: SQLSTATE=08001, state=1, Severity=10, native=126, line=0.) A way to verify the transport is sketched after this list.
  • With fewer than 100 users, the only point in splitting things across two separate servers is the Win 2008 Std (and older) license, which supports only 32 GB of RAM. In all other cases, 1C and SQL should definitely be installed on one server and given more memory (at least 64 GB). Giving MS SQL less than 24-28 GB of RAM is unjustified greed (if you think it has enough memory and everything works fine, maybe the file version of 1C would be enough for you?).
  • How much worse the combination of 1C and SQL works in a virtual machine is a topic for a separate article (hint: noticeably worse). Even with Hyper-V things are not so clear-cut...
  • Balanced performance mode is bad. The results are quite consistent with the file version.
  • Many sources say that debugging mode (ragent.exe -debug) causes a significant drop in performance. Well, it does reduce it, yes, but I would not call 2-3% a significant effect.
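Whether connections really go over Shared Memory can be verified from SQL Server itself, as promised in the list above. A minimal sketch using the third-party pyodbc package; the driver, server and instance names are placeholders matching the example in the text:

```python
# Ask SQL Server which transport the current connection uses.
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={SQL Server Native Client 10.0};"
    r"SERVER=zz-test\zztest;"      # ServerName\InstanceName, as in the text
    r"Trusted_Connection=yes;"
)
row = conn.execute(
    "SELECT net_transport FROM sys.dm_exec_connections "
    "WHERE session_id = @@SPID"
).fetchone()
print("Transport:", row[0])        # expect 'Shared memory' when run locally
```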
There is the least amount of advice here for any specific case, because brakes in the client-server mode are the most difficult case, and everything is tuned very individually. The easiest thing to say is that for normal operation you need a separate server ONLY for 1C and MS SQL, with processors at the maximum frequency (above 3 GHz), SSDs for the database, more memory (128+ GB), and no virtualization. If that helped, great, you are lucky (and there will be many such lucky people; more than half of the problems are solved by an adequate upgrade). If not, any other option requires separate investigation and settings.

We often receive questions about what slows down 1c, especially when switching to version 1c 8.3, thanks to our colleagues from Interface LLC, we tell you in detail:

In our previous publications, we already touched on the impact of disk subsystem performance on the speed of 1C, but this study concerned the local use of the application on a separate PC or terminal server. At the same time, most small implementations involve working with a file database over a network, where one of the user’s PCs is used as a server, or a dedicated file server based on a regular, most often also inexpensive, computer.

A small study of Russian-language resources on 1C showed that this issue is diligently avoided; if problems arise, it is usually recommended to switch to client-server or terminal mode. It has also become almost generally accepted that configurations on a managed application work much slower than usual. As a rule, the arguments given are “iron”: “Accounting 2.0 just flew, but the “troika” barely moved,” of course, there is some truth in these words, so let’s try to figure it out.

Resource consumption, first glance

Before we began this study, we set ourselves two goals: to find out whether managed application-based configurations are actually slower than conventional configurations, and which specific resources have the primary impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1, respectively, giving them 2 cores of the host Core i5-4670 and 2 GB of RAM, which corresponds to approximately an average office machine. The server was placed on a RAID 0 array of two WD Se, and the client was placed on a similar array of general-purpose disks.

As experimental bases, we selected several configurations of Accounting 2.0, release 2.0.64.12 , which was then updated to 3.0.38.52 , all configurations were launched on the platform 8.3.5.1443 .

The first thing that attracts attention is the increased size of the Troika’s information base, which has grown significantly, as well as a much greater appetite for RAM:

We are ready to hear the usual: “why did they add that to this three,” but let’s not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about maintaining databases. Also, employees of specialized companies servicing (read updating) these databases rarely think about this.

Meanwhile, the 1C information base is a full-fledged DBMS of its own format, which also requires maintenance, and for this there is even a tool called Testing and correcting the information base. Perhaps the name played a cruel joke, which somehow implies that this is a tool for troubleshooting problems, but low performance is also a problem, and restructuring and reindexing, along with table compression, are well-known tools for optimizing databases for any DBMS administrator. Shall we check?

After applying the selected actions, the database sharply “lost weight”, becoming even smaller than the “two”, which no one had ever optimized, and RAM consumption also decreased slightly.

Subsequently, after loading new classifiers and directories, creating indexes, etc. the size of the base will increase; in general, the “three” bases are larger than the “two” bases. However, this is not more important, if the second version was content with 150-200 MB of RAM, then the new edition needs half a gigabyte and this value should be taken into account when planning the necessary resources for working with the program.

Net

Network bandwidth is one of the most important parameters for network applications, especially like 1C in file mode, which move significant amounts of data across the network. Most networks of small enterprises are built on the basis of inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance indicators in 100 Mbit/s and 1 Gbit/s networks.

What happens when you launch a 1C file database over the network? The client downloads a fairly large amount of information into temporary folders, especially if this is the first, “cold” start. At 100 Mbit/s, we are expected to run into channel width and downloading can take a significant amount of time, in our case about 40 seconds (the cost of dividing the graph is 4 seconds).

The second launch is faster, since some of the data is stored in the cache and remains there until the reboot. Switching to a gigabit network can significantly speed up program loading, both “cold” and “hot”, and the ratio of values ​​is respected. Therefore, we decided to express the result in relative values, taking the largest value of each measurement as 100%:

As you can see from the graphs, Accounting 2.0 loads at any network speed twice as fast, the transition from 100 Mbit/s to 1 Gbit/s allows you to speed up the download time by four times. There is no difference between the optimized and non-optimized "troika" databases in this mode.

We also checked the influence of network speed on operation in heavy modes, for example, during group transfers. The result is also expressed in relative values:

Here it’s more interesting, the optimized base of the “three” in a 100 Mbit/s network works at the same speed as the “two”, and the non-optimized one shows twice as bad results. On gigabit, the ratios remain the same, the unoptimized “three” is also half as slow as the “two”, and the optimized one lags behind by a third. Also, the transition to 1 Gbit/s allows you to reduce the execution time by three times for edition 2.0 and by half for edition 3.0.

In order to evaluate the impact of network speed on everyday work, we used Performance measurement, performing a sequence of predetermined actions in each database.

Actually, for everyday tasks, network throughput is not a bottleneck, an unoptimized “three” is only 20% slower than a “two”, and after optimization it turns out to be about the same faster - the advantages of working in thin client mode are evident. The transition to 1 Gbit/s does not give the optimized base any advantages, and the unoptimized and the two begin to work faster, showing a small difference between themselves.

From the tests performed, it becomes clear that the network is not a bottleneck for the new configurations, and the managed application runs even faster than usual. You can also recommend switching to 1 Gbit/s if heavy tasks and database loading speed are critical for you; in other cases, new configurations allow you to work effectively even in slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article, we achieved an increase in 1C performance by placing databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured the performance of a disk server during a group run in two databases at once and got a rather optimistic result.

Despite the relatively large number of input/output operations per second (IOPS) - 913, the queue length did not exceed 1.84, which is a very good result for a two-disk array. Based on this, we can make the assumption that a mirror made from ordinary disks will be enough for the normal operation of 8-10 network clients in heavy modes.

So is an SSD needed on a server? The best way to answer this question will be through testing, which we carried out using a similar method, the network connection is 1 Gbit/s everywhere, the result is also expressed in relative values.

Let's start with the loading speed of the database.

It may seem surprising to some, but the SSD on the server does not affect the loading speed of the database. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to redoing:

We have already noted above that disk performance is quite sufficient even for working in heavy modes, so the speed of the SSD is also not affected, except for the unoptimized base, which on the SSD has caught up with the optimized one. Actually, this once again confirms that optimization operations organize information in the database, reducing the number of random I/O operations and increasing the speed of access to it.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from the SSD. You, of course, can purchase an SSD, but it would be much better to think about timely maintenance of the database. Also, do not forget about defragmenting the section with infobases on the server.

Client disk subsystem and SSD

We discussed the influence of SSD on the speed of operation of locally installed 1C in the previous material; much of what was said is also true for working in network mode. Indeed, 1C quite actively uses disk resources, including for background and routine tasks. In the figure below you can see how Accounting 3.0 quite actively accesses the disk for about 40 seconds after loading.

But at the same time, you should be aware that for a workstation where active work is carried out with one or two information databases, the performance resources of a regular mass-produced HDD are quite sufficient. Purchasing an SSD can speed up some processes, but you won’t notice a radical acceleration in everyday work, since, for example, downloading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but in itself cannot cause a program to slow down.

RAM

Despite the fact that RAM is now obscenely cheap, many workstations continue to work with the amount of memory that was installed when purchased. This is where the first problems lie in wait. Based on the fact that the average “troika” requires about 500 MB of memory, we can assume that a total amount of RAM of 1 GB will not be enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance, everything is not so bad, the program has curbed its appetites and fit well into the available memory, but let’s not forget that the need for operational data has not changed, so where did it go? Dumped into disk, cache, swap, etc., the essence of this operation is that data that is not needed at the moment is sent from fast RAM, the amount of which is not enough, to slow disk memory.

Where it leads? Let's see how system resources are used in heavy operations, for example, let's launch a group retransfer in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant; during processing it increases occasionally, but is not a limiting factor.

Now let's reduce the memory to 1 GB:

The situation is changing radically, the main load now falls on the hard drive, the processor and network are idle, waiting for the system to read the necessary data from the disk into memory and send unnecessary data there.

At the same time, even subjective work with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable; directories and magazines opened with a significant delay and active access to the disk. For example, opening the Sales of goods and services journal took about 20 seconds and was accompanied all this time by high disk activity (highlighted with a red line).

To objectively evaluate the impact of RAM on the performance of configurations based on a managed application, we carried out three measurements: the loading speed of the first database, the loading speed of the second database, and group re-running in one of the databases. Both databases are completely identical and were created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: if the loading time grows by about a third, which is still quite tolerable, then the time for performing operations in the database triples; there is no point in talking about any comfortable work in such conditions. By the way, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to deal with the cause rather than the consequences and simply buy the right amount of RAM.

Lack of RAM is the main reason why working with new 1C configurations turns out to be uncomfortable. Machines with 2 GB of memory on board should be considered the minimum suitable. At the same time, keep in mind that in our case "greenhouse" conditions were created: a clean system with only 1C and Task Manager running. In real life, on a work computer, as a rule, a browser and an office suite are open, an antivirus is running, and so on, so proceed from a need of 500 MB per database plus some reserve, so that during heavy operations you do not run into a lack of memory and a sharp drop in productivity.
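As a quick check of how much memory the 1C client actually occupies on a particular workstation, the standard tasklist utility is enough (1cv8.exe is the thick client, 1cv8c.exe the thin one):

    rem Show memory usage of the 1C client processes
    tasklist /fi "imagename eq 1cv8.exe"
    tasklist /fi "imagename eq 1cv8c.exe"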

CPU

Without exaggeration, the central processor can be called the heart of the computer, since it is the CPU that ultimately performs all the calculations. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the tests were performed twice, with 1 GB and 2 GB of memory.

The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took on the load when resources were lacking, but the rest of the time it gave no tangible advantage. 1C:Enterprise can hardly be called an application that actively uses processor resources; it is rather undemanding. And in difficult conditions the processor is burdened not so much by calculating the data of the application itself as by servicing overhead: additional input/output operations and the like.

Conclusions

So, why is 1C slow? First of all, this is a lack of RAM; the main load in this case falls on the hard drive and processor. And if they do not shine with performance, as is usually the case in office configurations, then we get the situation described at the beginning of the article - the “two” worked fine, but the “three” is ungodly slow.

In second place is network performance; a slow 100 Mbit/s channel can become a real bottleneck, but at the same time, the thin client mode is able to maintain a fairly comfortable level of operation even on slow channels.

Then you should pay attention to the disk drive; buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one would be a good idea. The difference between generations of hard drives can be assessed from the following material: Review of two inexpensive Western Digital Blue series drives 500 GB and 1 TB.

And finally the processor. A faster model, of course, will not be superfluous, but there is little point in increasing its performance unless this PC is used for heavy operations: group processing, heavy reports, month-end closing, etc.

We hope this material will help you quickly understand the question “why 1C is slow” and solve it most effectively and without extra costs.

The main purpose of writing this article is to avoid repeating obvious nuances for those administrators (and programmers) who have not yet gained experience with 1C.

The secondary goal: if I have missed anything, Infostart users will be the quickest to point it out to me.

V. Gilev’s test has already become a kind of “de facto” standard. The author on his website gave quite clear recommendations, but I will simply present some results and comment on the most likely errors. Naturally, the test results on your equipment may differ; this is just a guide for what should be and what you can strive for. I would like to note right away that changes must be made step by step, and after each step, check what result it gave.

There are similar articles on Infostart; I will put links to them in the relevant sections (if I miss something, please point it out in the comments and I will add it). So, let's assume your 1C is slow. How do you diagnose the problem, and how do you understand who is to blame, the administrator or the programmer?

Initial data:

Tested computer, main guinea pig: HP DL180 G6, equipped with 2× Xeon 5650, 32 GB RAM, Intel 362i, Win 2008 R2. For comparison, a Core i3-2100 shows comparable results in the single-threaded test. The equipment was deliberately chosen not to be the newest; with modern equipment the results are noticeably better.

For testing separate 1C and SQL servers, the SQL server: IBM System 3650 x4, 2× Xeon E5-2630, 32 GB RAM, Intel 350, Win 2008 R2.

To test a 10 Gbit network, Intel 520-DA2 adapters were used.

File version. (the database is on the server in a shared folder, clients connect via the network, CIFS/SMB protocol). Algorithm step by step:

0. Add Gilev’s test database to the file server in the same folder as the main databases. We connect from the client computer and run the test. We remember the result.

It is understood that even on old computers from 10 years ago (a Pentium on socket 775), the time from clicking the 1C:Enterprise shortcut to the database window appearing should be under a minute (Celeron = slow).

If your computer is worse than a socket-775 Pentium with 1 GB of RAM, then I sympathize with you: comfortable work with 1C 8.2 in the file version will be hard to achieve. Think about either an upgrade (it's high time) or switching to a terminal (or web, in the case of thin clients and managed forms) server.

If the computer is no worse, then you can kick the administrator. At a minimum, check the operation of the network, antivirus and HASP protection driver.

If Gilev’s test at this stage showed 30 “parrots” or higher, but the 1C working base still works slowly, the questions should be directed to the programmer.

1. As a reference for how much the client computer itself can "squeeze out", we check the operation of this computer alone, without the network. We install the test database on the local computer (on a very fast disk). If the client computer does not have a decent SSD, a ramdisk is created. For now, the simplest free option is Ramdisk enterprise.

To test version 8.2, a 256 MB ramdisk is enough. And here is the most important point: after rebooting the computer, with the ramdisk running, there should be 100-200 MB free on it. Accordingly, without a ramdisk, for normal operation there should be 300-400 MB of free memory.

To test version 8.3, a 256 MB ramdisk is also enough, but you need more free RAM.
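If you prefer the command line to Ramdisk enterprise's GUI, a comparable disk can be created, for example, with the free ImDisk driver (a sketch; the drive letter R: is arbitrary):

    rem Create a 256 MB ramdisk as drive R: and format it NTFS
    imdisk -a -s 256M -m R: -p "/fs:ntfs /q /y"
    rem Detach it when testing is finished
    imdisk -D -m R: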

When testing, you need to watch the processor load. In a case close to ideal (a ramdisk), the local file version of 1C loads one processor core while running. Accordingly, if during testing your processor core is not fully loaded, look for weak points. The influence of the processor on the operation of 1C has been described elsewhere, a little emotionally but generally correctly. Just for reference, even on modern Core i3s with high frequencies, numbers of 70-80 are quite realistic.

The most common errors at this stage.

a) An incorrectly configured antivirus. There are many antiviruses and the settings for each are different; I will only say that with proper configuration neither Dr.Web nor Kaspersky interferes with 1C. With the default settings, approximately 3-5 parrots (10-15%) can be taken away.

b) Performance mode. For some reason, few people pay attention to this, but the effect is the most significant. If you need speed, then you must do this, both on client and server computers. (Gilev has a good description. The only caveat is that on some motherboards, if you turn off Intel SpeedStep, you cannot turn on TurboBoost).

In short, while 1C is running there is a lot of waiting for responses from other devices (disk, network, etc.). While waiting for a response, if an energy-saving mode is enabled, the processor lowers its frequency. A response comes from the device, 1C (the processor) needs to work, but the first clock cycles run at the reduced frequency; then the frequency rises, and 1C is again waiting for a response from a device. And so on, many hundreds of times per second.

You can (and preferably should) enable performance mode in two places:

Via the BIOS. Disable the C1, C1E, Intel C-state (C2, C3, C4) modes. In different BIOSes they are called differently, but the meaning is the same. It takes a while to find them and a reboot is required, but if you do it once, you can forget about it. If you do everything correctly in the BIOS, the speed will increase. On some motherboards you can configure the BIOS so that the Windows performance mode no longer plays a role. (Examples of BIOS settings from Gilev.) These settings mainly concern server processors or "advanced" BIOSes; if you haven't found them and you do NOT have a Xeon, that's okay.

Control Panel - Power Options - High performance. The minus: if the computer has not been serviced for a long time, it will make more fan noise, heat up more, and consume more energy. This is the price of performance.
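The same switch can be made from the command line, which is convenient for scripting across many workstations; the GUID of the built-in High performance scheme is fixed in Windows:

    rem List the available power schemes; the active one is marked with an asterisk
    powercfg /list
    rem Activate the built-in High performance scheme
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c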

How to check that the mode is enabled: launch Task Manager - Performance - Resource Monitor - CPU and wait until the processor is not busy with anything.

These are the default settings: C-states enabled in the BIOS, balanced power mode.

C-states enabled in the BIOS, high performance mode: for Pentium and Core you can stop there; from a Xeon you can still squeeze out a few more "parrots".

C-states disabled in the BIOS, high performance mode: if you don't use Turbo Boost, this is what a server tuned for performance should look like.


And now the numbers. Let me remind you: Intel Xeon 5650, ramdisk. In the first case, the test shows 23.26, in the last - 49.5. The difference is almost twofold. The numbers may vary, but the ratio remains essentially the same for Intel Core.

Dear administrators, you can criticize 1C as much as you like, but if end users need speed, you need to enable high performance mode.

c) Turbo Boost. First you need to find out whether your processor supports this function. If it does, then you can still quite legally get some more performance. (I don't want to touch on the issue of frequency overclocking, especially of servers; do that at your own peril and risk. But I agree that increasing the bus speed from 133 to 166 gives a very noticeable increase in both speed and heat dissipation.)

How to turn on Turbo Boost has been written about elsewhere. But! For 1C there are some nuances (not the most obvious ones). The difficulty is that the maximum effect of Turbo Boost occurs when C-state is turned on. And we get something like this:

Please note that the multiplier is at maximum, the core speed is beautiful, and the performance is high. But what will happen with 1C as a result?

                             Multiplier   Core speed, GHz   CPU-Z Single Thread   Gilev ramdisk test,   Gilev ramdisk test,
                                                                                  file version          client-server
Without Turbo Boost               -              -                   -                    -                     -
C-state off, Turbo Boost         22              -                   -                  53.19                 40.32
C-state on, Turbo Boost          23              -                 1080                 53.13                 23.04

But in the end it turns out that according to CPU performance tests the version with a multiplier of 23 is ahead; according to Gilev's tests the file-version performance with multipliers of 22 and 23 is the same, but in the client-server version the variant with a multiplier of 23 is terrible, terrible, terrible (even if C-state is set to level 7, it is still slower than with C-state turned off). Therefore, the recommendation is to check both options for yourself and choose the best one. In any case, the difference between 49.5 and 53 parrots is quite significant, especially without much effort.

Conclusion - turbo boost must be turned on. Let me remind you that it is not enough to enable the Turbo boost item in the BIOS, you also need to look at other settings (BIOS: QPI L0s, L1 - disable, demand scrubbing - disable, Intel SpeedStep - enable, Turbo boost - enable. Control Panel - Power Options - High Performance) . And I would still (even for the file version) choose the option where c-state is turned off, even though the multiplier is smaller. It will turn out something like this...

A rather controversial point is memory frequency. For example, it is sometimes claimed that memory frequency has a very strong influence. My tests did not reveal such a dependence. I will not compare DDR2/3/4; I will show the results of changing the frequency within the same line. The memory modules are the same, but in the BIOS we force lower frequencies.

And test results. 1C 8.2.19.83, for the file version local ramdisk, for client-server 1C and SQL on one computer, Shared memory. Turbo boost is disabled in both versions. 8.3 shows comparable results.

The difference is within the measurement error. I specifically pulled out the CPU-Z screenshots to show that along with the frequency change other parameters also change, such as CAS Latency and RAS to CAS Delay, which neutralizes the frequency change. The difference appears when memory modules are physically changed from slower to faster, but even there the numbers are not particularly significant.

2. When we have sorted out the processor and memory of the client computer, we move on to the next very important place: the network. Many volumes have been written about network tuning and there are articles on Infostart, but here I will not focus on this topic. Before starting 1C testing, please make sure that iperf between the two computers shows the full bandwidth (for 1 Gbit cards - well, at least 850 Mbit, or better yet 950-980) and that Gilev's advice has been followed. Then the simplest test of operation will be, oddly enough, copying one large file (5-10 GB) over the network. An indirect sign of normal operation on a 1 Gbit network is an average copying speed of 100 MB/s; good operation is 120 MB/s. I would like to draw your attention to the fact that the weak point may be (among other things) the processor load: the SMB protocol on Linux is rather poorly parallelized, and during operation it can quite easily eat up one processor core and go no further.
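For reference, a minimal bandwidth check looks something like this (iperf3 syntax; the host name fileserver is just an example):

    rem On the file server
    iperf3 -s
    rem On the client computer: a 30-second test against the server
    iperf3 -c fileserver -t 30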

And further: with default settings, a Windows client works best with a Windows server (or even a Windows workstation) over the SMB/CIFS protocol, while a Linux client (Debian and Ubuntu; I did not look at the others) works better with Linux and NFS (it also works over SMB, but on NFS the parrots are higher). The fact that in linear Windows-to-Linux copying a file copies faster to an NFS server in a single stream does not mean anything. Tuning Debian for 1C is a topic for a separate article that I am not ready for yet, although I can say that in the file version I got even slightly better performance than with the Windows variant on the same equipment; however, with Postgres and over 50 users everything is still very bad for me.

The most important point, which "burned" administrators know but beginners do not take into account: there are many ways to set the path to the 1C database. You can use \\server\share, you can use \\192.168.0.1\share, you can do net use z: \\192.168.0.1\share (and in some cases this method will also work, but not always) and then specify drive Z. It seems that all these paths point to the same place, but for 1C there is only one way that reliably provides normal performance. So, here is what you need to do to get it right:

On the command line (or in policies, or however is convenient for you), do net use DriveLetter: \\server\share. Example: net use m: \\server\bases. I specifically emphasize: NOT the IP address, but the server NAME. If the server name is not visible, add it to the DNS on the server, or locally to the hosts file. But the address must be by name. Accordingly, in the path to the database, use this drive (see picture).
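A minimal sketch of this setup; the server name, share and drive letter are examples:

    rem Map the share by server NAME, not by IP address
    net use M: \\server\bases /persistent:yes
    rem If the name does not resolve, add it to DNS or to the client's
    rem C:\Windows\System32\drivers\etc\hosts file, for example:
    rem   192.168.0.1    server
    rem Then point the infobase path in 1C at the mapped drive, e.g. M:\Accounting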

And now I will show with numbers why this is the advice. Initial data: Intel X520-DA2, Intel 362, Intel 350, Realtek 8169 cards. OS Win 2008 R2, Win 7, Debian 8. Latest drivers, updates applied. Before testing, I made sure that Iperf gives the full bandwidth (except for 10 Gbit cards, it only managed to squeeze out 7.2 Gbit, I’ll see why later, the test server is not yet configured properly). The disks are different, but everywhere there is an SSD (I specially inserted a single disk for testing, it is not loaded with anything else) or a raid from an SSD. The speed of 100 Mbit was obtained by limiting the settings of the Intel 362 adapter. There was no difference between 1 Gbit copper Intel 350 and 1 Gbit optical Intel X520-DA2 (obtained by limiting the speed of the adapter). Maximum performance, turbo boost is turned off (just for comparability of results, turbo boost for good results adds a little less than 10%, for bad results it may not have any effect at all). Versions 1C 8.2.19.86, 8.3.6.2076. I don’t give all the numbers, but only the most interesting ones, so that you have something to compare with.

Test columns (server - client, connection method; judging by the test description above, columns 1-2 were measured at 100 Mbit, 3-6 at 1 Gbit, 7-8 at 10 Gbit):

1. Win 2008 - Win 2008, by IP address
2. Win 2008 - Win 2008, by name
3. Win 2008 - Win 2008, by IP address
4. Win 2008 - Win 2008, by name
5. Win 2008 - Win 7, by name
6. Win 2008 - Debian, by name
7. Win 2008 - Win 2008, by IP address
8. Win 2008 - Win 2008, by name

                     1      2      3      4      5      6      7      8
1C 8.2           11.20  26.18  15.20  43.86  40.65  37.04  16.23  44.64
(8.2.19.83)      11.29  26.18  15.29  43.10  40.65  36.76  15.11  44.10
                 12.15  25.77  15.15  43.10      -      -  14.97  42.74
1C 8.3            6.13  34.25  14.98  43.10  39.37  37.59  15.53  42.74
(8.3.6.2076)      6.61  33.33  15.58  43.86  40.00  37.88  16.23  42.74
                     -  33.78  15.53  43.48  39.37  37.59      -  42.74

Conclusions (from the table and from personal experience. Applies only to the file version):

Over the network you can get quite normal numbers for work if the network is properly configured and the path is entered correctly in 1C. Even the first Core i3 can easily produce 40+ parrots, which is quite good, and these are not just parrots: in real work the difference is also noticeable. But! The limitation when working with several (more than 10) users will no longer be the network (1 Gbit is still enough there) but locking during multi-user work (Gilev).

The 1C 8.3 platform is many times more demanding about proper network configuration. Basic settings - see Gilev, but keep in mind that anything can have an influence. I have seen acceleration from uninstalling (not just disabling) the antivirus, from removing protocols like FCoE, from changing drivers to an older but Microsoft-certified version (especially relevant for cheap cards like ASUS and D-Link), and from removing the second network card from the server. There are a lot of options; set up your network carefully. There may well be a situation where platform 8.2 gives acceptable numbers and 8.3 gives two or even more times fewer. Try playing with 8.3 platform versions; sometimes you get a very big effect.

1C 8.3.6.2076 (or maybe later, I have not pinned down the exact version yet) is still easier to configure over the network than 8.3.7.2008. I managed to achieve normal operation over the network with 8.3.7.2008 (in comparable parrots) only a few times and could not reproduce it for the more general case. I did not dig very deep, but judging by the long dumps from Process Explorer, writing there is not as good as in 8.3.6.

Despite the fact that on a 100 Mbit network its load graph is low (we can say the network is free), the operating speed is still much lower than on 1 Gbit. The reason is network latency.

All other things being equal (a well-functioning network), for 1C 8.2 an Intel-Realtek pairing is 10% slower than Intel-Intel. But Realtek-Realtek can produce sharp drops out of the blue. Therefore, if you have the money, it is better to use Intel network cards everywhere; if not, then put Intel at least on the server (your Captain Obvious). And there are many times more instructions for tuning Intel network cards.

Default antivirus settings (using Dr.Web version 10 as an example) take away about 8-10% of parrots. If you configure it properly (allow the 1cv8 process to do everything, although this is not safe), the speed is the same as without an antivirus.

Do NOT blindly listen to the Linux gurus. A server with Samba is great and free, but if you install Win XP or Win 7 (or even better, a server OS) on the server, the file version of 1C will work faster. Yes, Samba, the protocol stack, the network settings and much, much more can be tuned well in Debian/Ubuntu, but that is recommended for specialists. There is no point in installing Linux with default settings and then saying it is slow.

It's quite a good idea to check the operation of disks connected via net use using fio . At least it will be clear whether these are problems with the 1C platform, or with the network/disk.

For the single-user version, I can’t think of tests (or a situation) where the difference between 1 Gbit and 10 Gbit would be visible. The only thing where 10Gbit for the file version gave better results is connecting disks via iSCSI, but this is a topic for a separate article. Still, I think that for the file version 1 Gbit cards are enough.

I don’t understand why, with a 100 Mbit network, 8.3 works noticeably faster than 8.2, but it was a fact. All other equipment, all other settings are absolutely the same, it’s just that in one case 8.2 is tested, and in the other - 8.3.

Untuned NFS (win-win or win-lin) gives 6 parrots; I did not include them in the table. After tuning I got 25, but it was unstable (the spread between measurements was more than 2 units). I cannot yet give recommendations on using Windows with the NFS protocol.

After all the settings and checks, we run the test again from the client computer and rejoice at the improved result (if it works). If the result has improved, there are more than 30 parrots (and especially more than 40), fewer than 10 users are working at the same time, and the working database is still slow - almost certainly a problem with the programmer (or you have already reached the peak capabilities of the file version).

Terminal server. (the database is on the server, clients connect via the network, RDP protocol). Algorithm step by step:

0. Add Gilev’s test database to the server in the same folder as the main databases. We connect from the same server and run the test. We remember the result.

1. In the same way as in the file version, we set up the work. In the case of a terminal server, the processor generally plays the main role (it is assumed that there are no obvious weak points, such as lack of memory or a huge amount of unnecessary software).

2. Setting up network cards in the case of a terminal server has virtually no effect on the operation of 1c. To ensure “special” comfort, if your server produces more than 50 parrots, you can play with new versions of the RDP protocol, just for the comfort of users, faster response and scrolling.

3. If a large number of users work actively (and here you can already think about connecting 30 people to one database), it is very advisable to install an SSD. For some reason it is believed that the disk does not particularly affect the operation of 1C, but all such tests are carried out with the controller's write cache enabled, which is incorrect: the test base is small and fits quite well in the cache, hence the high numbers. On real (large) databases everything will be completely different, so for my tests the cache is disabled.

For example, I checked the operation of the Gilev test with different disk options. I installed disks from what was at hand, just to show the trend. The difference between 8.3.6.2076 and 8.3.7.2008 is small (in the ramdisk Turbo Boost variant 8.3.6 produces 56.18 and 8.3.7.2008 produces 55.56; in the other tests the difference is even smaller). Power mode - maximum performance, Turbo Boost disabled (unless otherwise stated).

Test columns:

1. Raid 10, 4x SATA 7200 (ATA ST31500341AS)
2. Raid 10, 4x SAS 10k
3. Raid 10, 4x SAS 15k
4. Single SSD
5. Ramdisk
6. Ramdisk, Turbo Boost enabled (apparently, per the note above)
7. RAID controller cache enabled

                     1        2        3        4        5        6        7
1C 8.2           21.74    28.09    32.47    49.02    50.51    53.76    49.02
(8.2.19.83)      21.65    28.57    32.05    48.54    49.02    53.19        -
                 21.65    28.41    31.45    48.54    49.50    53.19        -
1C 8.3           33.33    42.74    45.05    51.55    52.08    55.56    51.55
(8.3.7.2008)     33.46    42.02    45.05    51.02    52.08    54.95        -
                 35.46    43.01    44.64    51.55    52.08    56.18        -

The enabled RAID controller cache eliminates all the differences between the disks; the numbers are the same for both SATA and SAS. Testing with it on a small volume of data is useless and not indicative of anything.

For platform 8.2, the difference in performance between the SATA and SSD options is more than twofold. This is not a typo. If you look at the performance monitor during the test on SATA drives, you can clearly see "Active disk time (%)" at 80-95. Yes, if you enable the disks' own write cache, the speed rises to 35; if you enable the RAID controller cache, to 49 (regardless of which disks are being tested at the moment). But these are synthetic cache parrots; in real work with large databases there will never be a 100% write-cache hit ratio.

The speed of even cheap SSDs (I tested an Agility 3) is quite enough to run the file version. Write endurance is another matter; it needs to be looked at in each specific case. It is clear that an Intel 3700 will have an order of magnitude more, but the price is to match. And yes, I understand that when testing an SSD I am also largely testing that disk's cache; the real results will be lower.

The most correct (from my point of view) solution would be to allocate 2 SSD disks in a mirrored raid for a file database (or several file databases), and not place anything else there. Yes, with a mirror, SSDs wear out equally, and this is a minus, but at least the controller electronics are somehow protected from errors.

The main advantages of SSD drives for the file version will appear when there are many databases, each with several users. If there are 1-2 databases, and there are about 10 users, then SAS disks will be enough. (but in any case, look at loading these disks, at least through perfmon).
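Watching that load does not require opening the perfmon GUI; the standard typeperf utility will do (a sketch):

    rem Sample total disk activity and queue length once a second, 60 samples
    typeperf "\PhysicalDisk(_Total)\% Disk Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 1 -sc 60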

The main advantages of a terminal server: it can have very weak clients, and network settings affect the terminal server much less (again, your Captain Obvious).

Conclusions: if you run the Gilev test on the terminal server (from the same disk where the working databases are located) and the Gilev test shows a good result (above 30) at the very moments when the working database is slow, then the programmer is most likely to blame for the slowness of the main working database.

If Gilev's test shows small numbers even though you have a high-clock processor and fast disks, then the administrator needs to take at least perfmon, record all the results somewhere, and watch, observe, and draw conclusions. There will be no universal advice here.

Client-server option.

Tests were carried out only on 8.2, because on 8.3 everything depends quite seriously on the version.

For testing, I chose different server options and networks between them to show the main trends.

Tested configurations (column headers):

1. SQL: Xeon E5-2630
2. SQL: Xeon E5-2630, Fiber channel - SSD
3. SQL: Xeon E5-2630, Fiber channel - SAS
4. SQL: Xeon E5-2630, local SSD
5. SQL: Xeon E5-2630, Fiber channel - SSD
6. SQL: Xeon E5-2630, local SSD
7. 1C: Xeon 5650
8. 1C: Xeon 5650, Shared memory
9. 1C: Xeon 5650
10. 1C: Xeon 5650
11. 1C: Xeon 5650

             1      2      3      4      5      6      7      8      9     10     11
1C 8.2   16.78  18.23  16.84  28.57  27.78  32.05  34.72  36.50  23.26  40.65  39.37
         17.12  17.06  14.53  29.41  28.41  31.45  34.97  36.23  23.81  40.32  39.06
         16.72  16.89  13.44  29.76  28.57  32.05  34.97  36.23  23.26  40.32  39.06

It seems I have considered all the interesting options; if anything else interests you, write in the comments and I will try to cover it.

SAS on a storage system is slower than local SSDs, even though the storage system has larger cache sizes. SSDs, both local and on the storage system, work at comparable speeds in Gilev's test. I do not know of any standard multi-threaded test (one that tests not just writes, but all the equipment) other than the 1C load test from the MCC.

Changing the 1C server from a 5520 to a 5650 gave almost a doubling of performance. Yes, the server configurations do not match completely, but it shows the trend (no surprise).

Increasing the frequency on the SQL server certainly gives an effect, but not the same one as on the 1C server; MS SQL Server is excellent (if you ask it to be) at using multiple cores and free memory.

Changing the network between 1C and SQL from 1 Gbit to 10 Gbit adds approximately 10% in parrots. I expected more.

Enabling Shared Memory still gives an effect, although not the 15% that is sometimes described. Be sure to do it; fortunately, it is quick and easy. If during installation someone gave the SQL server a named instance, then for 1C to work, the server name must be specified not by FQDN (tcp/ip will work), not through localhost or just ServerName, but as ServerName\InstanceName, for example zz-test\zztest. (Otherwise there will be a DBMS error: Microsoft SQL Server Native Client 10.0: Shared Memory Provider: The shared memory library used to establish a connection with SQL Server 2000 was not found. HRESULT=80004005, HRESULT=80004005, HRESULT=80004005, SQLSrvr: SQLSTATE=08001, state=1, Severity=10, native=126, line=0.)
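Whether Shared Memory is actually being used is easy to verify with sqlcmd: the lpc: prefix forces that protocol, so the connection simply fails if it is unavailable (the instance name here repeats the example above):

    rem Force a Shared Memory connection and show the transport actually used
    sqlcmd -S lpc:zz-test\zztest -E -Q "SELECT net_transport FROM sys.dm_exec_connections WHERE session_id = @@SPID"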

For less than 100 users, the only point in splitting it into two separate servers is a Win 2008 Std (and older) license, which only supports 32GB of RAM. In all other cases, 1C and SQL definitely need to be installed on one server and given more (at least 64 GB) memory. Giving MS SQL less than 24-28 GB of RAM is unjustified greed (if you think that you have enough memory for it and everything works fine, maybe the file version of 1C would be enough for you?)
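The memory ceiling for MS SQL can also be set from the command line rather than through Management Studio; a sketch, where the server name is an example and 28672 MB corresponds to the 28 GB mentioned above:

    rem Give MS SQL Server a fixed memory limit (the value is in MB)
    sqlcmd -S server\instance -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory', 28672; RECONFIGURE;"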

How much worse the combination of 1C and SQL works in a virtual machine is a topic for a separate article (hint: noticeably worse). Even in Hyper-V everything is not so clear-cut...

Balanced power mode is bad. The results are entirely consistent with the file version.

Many sources say that debugging mode (ragent.exe -debug) causes a significant decrease in performance. Well, it does reduce it, yes, but I would not call 2-3% a significant effect.
